Chapter 11. Jakarta Transactions 11.1. Overview 11.1.1. Overview of Jakarta Transactions Introduction This section provides a foundational understanding of Jakarta Transactions. About Jakarta Transactions Transaction Lifecycle Jakarta Transactions Transaction Example 11.2. Transaction Concepts 11.2.1. About Transactions A transaction consists of two or more actions, which must either all succeed or all fail. A successful outcome is a commit, and a failed outcome is a rollback. In a rollback, each member's state is reverted to its state before the transaction attempted to commit. The typical standard for a well-designed transaction is that it is Atomic, Consistent, Isolated, and Durable (ACID). 11.2.2. About ACID Properties for Transactions ACID is an acronym which stands for Atomicity, Consistency, Isolation, and Durability. This terminology is usually used in the context of databases or transactional operations. Atomicity For a transaction to be atomic, all transaction members must make the same decision. Either they all commit, or they all roll back. If atomicity is broken, what results is termed a heuristic outcome. Consistency Consistency means that data written to the database is guaranteed to be valid data, in terms of the database schema. The database or other data source must always be in a consistent state. One example of an inconsistent state would be a field in which half of the data is written before an operation aborts. A consistent state would be if all the data were written, or the write were rolled back when it could not be completed. Isolation Isolation means that data being operated on by a transaction must be locked before modification, to prevent processes outside the scope of the transaction from modifying the data. Durability Durability means that in the event of an external failure after transaction members have been instructed to commit, all members will be able to continue committing the transaction when the failure is resolved. This failure can be related to hardware, software, network, or any other involved system. 11.2.3. About the Transaction Coordinator or Transaction Manager The terms Transaction Coordinator and Transaction Manager (TM) are mostly interchangeable in terms of transactions with JBoss EAP. The term Transaction Coordinator is usually used in the context of distributed JTS transactions. In Jakarta Transactions transactions, the TM runs within JBoss EAP and communicates with transaction participants during the two-phase commit protocol. The TM tells transaction participants whether to commit or roll back their data, depending on the outcome of other transaction participants. In this way, it ensures that transactions adhere to the ACID standard. About Transaction Participants About ACID Properties for Transactions About the 2-Phase Commit Protocol 11.2.4. About Transaction Participants A transaction participant is any resource within a transaction that has the ability to commit or to roll back state. It is generally a database or a Jakarta Messaging broker, but by implementing the transaction interface, application code could also act as a transaction participant. Each participant of a transaction independently decides whether it is able to commit or roll back its state, and only if all participants can commit does the transaction as a whole succeed. Otherwise, each participant rolls back its state, and the transaction as a whole fails.
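As noted above, application code can act as a transaction participant by implementing the transaction interface. The following minimal sketch, which assumes the javax.transaction.xa API shipped with JBoss EAP 7.4, illustrates the shape of such a participant; the class name and its in-memory state are purely illustrative, and a real participant would also have to be enlisted with the current transaction, for example through Transaction.enlistResource().

import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Hypothetical participant: tracks an in-memory value so it can commit or roll it back.
public class InMemoryParticipant implements XAResource {

    private String committedValue = "";
    private String pendingValue;

    public void set(String value) { this.pendingValue = value; }

    @Override
    public void start(Xid xid, int flags) throws XAException { /* work for xid begins */ }

    @Override
    public void end(Xid xid, int flags) throws XAException { /* work for xid is complete */ }

    @Override
    public int prepare(Xid xid) throws XAException {
        // Vote: XA_OK means "I can commit", XA_RDONLY means "nothing to commit".
        return pendingValue == null ? XAResource.XA_RDONLY : XAResource.XA_OK;
    }

    @Override
    public void commit(Xid xid, boolean onePhase) throws XAException {
        committedValue = pendingValue;   // make the new state durable (in memory here)
        pendingValue = null;
    }

    @Override
    public void rollback(Xid xid) throws XAException {
        pendingValue = null;             // discard the uncommitted state
    }

    @Override
    public void forget(Xid xid) throws XAException { }

    @Override
    public Xid[] recover(int flag) throws XAException { return new Xid[0]; }

    @Override
    public boolean isSameRM(XAResource other) throws XAException { return other == this; }

    @Override
    public int getTransactionTimeout() throws XAException { return 0; }

    @Override
    public boolean setTransactionTimeout(int seconds) throws XAException { return false; }
}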
The TM coordinates the commit or rollback operations and determines the outcome of the transaction. 11.2.5. About Jakarta Transactions Jakarta Transactions is part of the Jakarta EE specification. It is defined in the Jakarta Transactions 1.3 Specification. Implementation of Jakarta Transactions is done using the TM, which is covered by the Narayana project for the JBoss EAP application server. The TM allows applications to assign various resources, for example, databases or Jakarta Messaging brokers, to a single global transaction. The global transaction is referred to as an XA transaction. Generally, resources with XA capabilities are included in such transactions, but non-XA resources could also be part of global transactions. There are several optimizations which help non-XA resources to behave as XA capable resources. For more information, see LRCO Optimization for Single-phase Commit. In this document, the term Jakarta Transactions refers to two things: the Jakarta Transactions API, which is defined by the Jakarta EE specification, and the way the TM processes transactions. In Jakarta Transactions mode, the TM shares the data in memory and transfers the transaction context by remote Jakarta Enterprise Beans calls. In JTS mode, the data is shared by sending Common Object Request Broker Architecture (CORBA) messages and the transaction context is transferred by IIOP calls. Both modes support distribution of transactions over multiple JBoss EAP servers. About Distributed Transactions About XA Datasources and XA Transactions 11.2.6. About JTS JTS is a mapping of the Object Transaction Service (OTS) to Jakarta. Jakarta EE applications use the Jakarta Transactions API to manage transactions. Jakarta Transactions then interacts with an Object Transaction Service (OTS) transaction implementation when the transaction manager is switched to JTS mode. JTS works over the IIOP protocol. Transaction managers that use JTS communicate with each other using a process called an Object Request Broker (ORB), using a communication standard called Common Object Request Broker Architecture (CORBA). For more information, see ORB Configuration in the JBoss EAP Configuration Guide. From an application standpoint that uses Jakarta Transactions, a JTS transaction behaves in the same way as a Jakarta Transactions transaction. Note The implementation of JTS included in JBoss EAP supports distributed transactions. The difference from fully-compliant JTS transactions is interoperability with external third-party ORBs. This feature is unsupported with JBoss EAP. Supported configurations distribute transactions across multiple JBoss EAP containers only. 11.2.7. About XML Transaction Service The XML Transaction Service (XTS) component supports the coordination of private and public web services in a business transaction. Using XTS, you can coordinate complex business transactions in a controlled and reliable manner. The XTS API supports a transactional coordination model based on the WS-Coordination, WS-Atomic Transaction, and WS-Business Activity protocols. 11.2.7.1. Overview of Protocols Used by XTS The WS-Coordination (WS-C) specification defines a framework that allows different coordination protocols to be plugged in to coordinate work between clients, services, and participants. The WS-Transaction (WS-T) protocol comprises the pair of transaction coordination protocols, WS-Atomic Transaction (WS-AT) and WS-Business Activity (WS-BA), which utilize the coordination framework provided by WS-C.
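The client-side view of these protocols, described in more detail in the Atomic Transaction Process section that follows, can be sketched against the Narayana XTS client API that backs the xts subsystem. The sketch below is an illustration only: the com.arjuna.mw.wst11 class names are taken from the Narayana XTS client API as the author understands it, and the web service port interface is hypothetical.

import com.arjuna.mw.wst11.UserTransaction;
import com.arjuna.mw.wst11.UserTransactionFactory;

public class WsAtClientSketch {

    // Hypothetical generated client stub for a WS-AT aware web service.
    public interface InventoryPort {
        void reserveStock(String itemId, int quantity);
    }

    public void reserve(InventoryPort port) throws Exception {
        // WS-AT transaction demarcation, distinct from javax.transaction.UserTransaction.
        UserTransaction wsat = UserTransactionFactory.userTransaction();
        wsat.begin();                          // obtains a WS-C coordination context from the activation service
        try {
            port.reserveStock("widget-42", 5); // the context travels in the SOAP header of this call
            wsat.commit();                     // drives the WS-AT completion protocol
        } catch (Exception e) {
            wsat.rollback();                   // drives the WS-AT rollback instead
            throw e;
        }
    }
}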
WS-T was developed to unify existing traditional transaction processing systems, allowing them to communicate reliably with one another. 11.2.7.2. Web Services-Atomic Transaction Process An atomic transaction (AT) is designed to support short-duration interactions where ACID semantics are appropriate. Within the scope of an AT, web services typically employ bridging to access XA resources, such as databases and message queues, under the control of the WS-T. When the transaction terminates, the participant propagates the outcome decision of the AT to the XA resources, and the appropriate commit or rollback actions are taken by each participant. 11.2.7.2.1. Atomic Transaction Process To initiate an AT, the client application first locates a WS-C Activation Coordinator web service that supports WS-T. The client sends a WS-C CreateCoordinationContext message to the service, specifying http://schemas.xmlsoap.org/ws/2004/10/wsat as its coordination type. The client receives an appropriate WS-T context from the activation service. The response to the CreateCoordinationContext message, the transaction context, has its CoordinationType element set to the WS-AT namespace, http://schemas.xmlsoap.org/ws/2004/10/wsat. It also contains a reference to the atomic transaction coordinator endpoint, the WS-C Registration Service, where participants can be enlisted. The client normally proceeds to invoke web services and complete the transaction, either committing all the changes made by the web services, or rolling them back. In order to be able to drive this completion, the client must register itself as a participant for the completion protocol, by sending a register message to the registration service whose endpoint was returned in the coordination context. Once registered for completion, the client application then interacts with web services to accomplish its business-level work. With each invocation of a business web service, the client inserts the transaction context into a SOAP header block, such that each invocation is implicitly scoped by the transaction. The toolkits that support WS-AT aware web services provide facilities to correlate contexts found in SOAP header blocks with back-end operations. This ensures that modifications made by the web service are done within the scope of the same transaction as the client and subject to commit or rollback by the Transaction Coordinator. Once all the necessary application work is complete, the client can terminate the transaction, with the intent of making any changes to the service state permanent. The completion participant instructs the coordinator to try to commit or roll back the transaction. When the commit or rollback operation completes, a status is returned to the participant to indicate the outcome of the transaction. For more details, see WS-Coordination in the Narayana Project Documentation. 11.2.7.2.2. WS-AT Interoperability with Microsoft .NET Clients The xts subsystem can have issues communicating with Microsoft .NET clients because of differences in the .NET implementation of the WS-AT specification. The .NET implementation of the WS-AT specification forces any call to be asynchronous. To enable interoperability with .NET clients, an asynchronous registration option is available in the JBoss EAP xts subsystem. XTS asynchronous registration is disabled by default, and you should only enable it if necessary. To enable asynchronous registration for WS-AT interoperability with .NET clients, use the following management CLI command:
11.2.7.3. Web Services-Business Activity Process Web Services-Business Activity (WS-BA) defines a protocol for web service applications to enable existing business processing and workflow systems to wrap their proprietary mechanisms and interoperate across implementations and business boundaries. Unlike the WS-AT protocol model, where participants inform the transaction coordinator of their state only when asked, a child activity within a WS-BA can specify its outcome to the coordinator directly, without waiting for a request. A participant can choose to exit the activity or notify the coordinator of a failure at any point. This feature is useful when tasks fail because the notification can be used to modify the goals and drive processing forward, without waiting until the end of the transaction to identify failures. 11.2.7.3.1. WS-BA Process Services are requested to do work. Wherever these services have the ability to undo any work, they inform the WS-BA, in case the WS-BA later decides to cancel the work. If the WS-BA suffers a failure, it can instruct the service to execute its undo behavior. The WS-BA protocols employ a compensation-based transaction model. When a participant in a business activity completes its work, it can choose to exit the activity. This choice does not allow any subsequent rollback. Alternatively, the participant can complete its activity, signaling to the coordinator that the work it has done can be compensated if, at some later point, another participant notifies a failure to the coordinator. In this latter case, the coordinator asks each non-exited participant to compensate for the failure, giving them the opportunity to execute whatever compensating action they consider appropriate. If all participants exit or complete without failure, the coordinator notifies each completed participant that the activity has been closed. For more details, see WS-Coordination in the Narayana Project Documentation. 11.2.7.4. Transaction Bridging Overview Transaction Bridging describes the process of linking the Jakarta EE and WS-T domains. The transaction bridge component, txbridge, provides bi-directional linkage, such that either type of transaction can encompass business logic designed for use with the other type. The technique used by the bridge is a combination of interposition and protocol mapping. In the transaction bridge, an interposed coordinator is registered into the existing transaction and performs the additional task of protocol mapping; that is, it appears to its parent coordinator to be a resource of its native transaction type, while appearing to its children to be a coordinator of their native transaction type, even though these transaction types differ. The transaction bridge resides in the package org.jboss.jbossts.txbridge and its subpackages. It consists of two distinct sets of classes, one for bridging in each direction. For more details, see TXBridge Guide in the Narayana Project Documentation. 11.2.8. About XA Resources and XA Transactions XA stands for eXtended Architecture, which was developed by the X/Open Group to define a transaction that uses more than one back-end data store. The XA standard describes the interface between a global TM and a local resource manager. XA allows multiple resources, such as application servers, databases, caches, and message queues, to participate in the same transaction, while preserving all four ACID properties.
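To make the idea of multiple resources in one XA transaction concrete, the following hedged sketch shows a container-managed bean whose single method updates a database and sends a Jakarta Messaging message, so that both either commit or roll back together. The JNDI names, the table name, and the assumption that the datasource and connection factory are configured as XA-capable are illustrative, not taken from the original text.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Queue;
import javax.sql.DataSource;

@Stateless
public class OrderService {

    @Resource(lookup = "java:jboss/datasources/OrdersXADS")   // hypothetical XA datasource
    private DataSource ordersDs;

    @Inject
    private JMSContext jms;                                    // assumed to use an XA-capable connection factory

    @Resource(lookup = "java:/jms/queue/OrdersQueue")          // hypothetical queue
    private Queue ordersQueue;

    // CMT: the container wraps this method in one XA transaction, so the database
    // insert and the JMS send either both commit or both roll back.
    public void placeOrder(String orderId) throws SQLException {
        try (Connection c = ordersDs.getConnection();
             PreparedStatement ps = c.prepareStatement("INSERT INTO orders (id) VALUES (?)")) {
            ps.setString(1, orderId);
            ps.executeUpdate();                                // participant 1: the database
        }
        jms.createProducer().send(ordersQueue, orderId);       // participant 2: the messaging broker
    }
}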
One of the four ACID properties is atomicity, which means that if one of the participants fails to commit its changes, the other participants abort the transaction, and restore their state to the same status as before the transaction occurred. An XA resource is a resource that can participate in an XA global transaction. An XA transaction is a transaction that can span multiple resources. It involves a coordinating TM, with one or more databases or other transactional resources, all involved in a single global XA transaction. 11.2.9. About XA Recovery TM implements X/Open XA specification and supports XA transactions across multiple XA resources. XA Recovery is the process of ensuring that all resources affected by a transaction are updated or rolled back, even if any of the resources that are transaction participants crash or become unavailable. Within the scope of JBoss EAP, the transactions subsystem provides the mechanisms for XA Recovery to any XA resources or subsystems that use them, such as XA datasources, Jakarta Messaging message queues, and Jakarta Connectors resource adapters. XA Recovery happens without user intervention. In the event of an XA Recovery failure, errors are recorded in the log output. Contact Red Hat Global Support Services if you need assistance. The XA recovery process is driven by a periodic recovery thread which is launched by default every two minutes. The periodic recovery thread processes all unfinished transactions. Note It can take four to eight minutes to complete the recovery for an in-doubt transaction because it might require multiple runs of the recovery process. 11.2.10. Limitations of the XA Recovery Process XA recovery has the following limitations: The transaction log might not be cleared from a successfully committed transaction. If the JBoss EAP server crashes after an XAResource commit method successfully completes and commits the transaction, but before the coordinator can update the log, you might see the following warning message in the log when you restart the server: This is because upon recovery, the JBoss Transaction Manager (TM) sees the transaction participants in the log and attempts to retry the commit. Eventually the JBoss TM assumes the resources are committed and no longer retries the commit. In this situation, you can safely ignore this warning as the transaction is committed and there is no loss of data. To prevent the warning, set the com.arjuna.ats.jta.xaAssumeRecoveryComplete property value to true . This property is checked whenever a new XAResource instance cannot be located from any registered XAResourceRecovery instance. When set to true , the recovery assumes that a commit attempt succeeded and the instance can be removed from the log with no further recovery attempts. This property must be used with care because it is global and when used incorrectly could result in XAResource instances remaining in an uncommitted state. Note JBoss EAP 7.4 has an implemented enhancement to clear transaction logs after a successfully committed transaction and the above situation should not occur frequently. Rollback is not called for JTS transaction when a server crashes at the end of XAResource.prepare() . If the JBoss EAP server crashes after the completion of an XAResource.prepare() method call, all of the participating XAResource instances are locked in the prepared state and remain that way upon server restart. 
The transaction is not rolled back and the resources remain locked until the transaction times out or a database administrator manually rolls back the resources and clears the transaction log. For more information, see https://issues.jboss.org/browse/JBTM-2124 Periodic recovery can occur on committed transactions. When the server is under excessive load, the server log might contain the following warning message, followed by a stacktrace: Under heavy load, the processing time taken by a transaction can overlap with the timing of the periodic recovery process's activity. The periodic recovery process detects the transaction still in progress and attempts to initiate a rollback but in fact the transaction continues to completion. At the time the periodic recovery attempts but fails the rollback, it records the rollback failure in the server log. The underlying cause of this issue will be addressed in a future release, but in the meantime a workaround is available. Increase the interval between the two phases of the recovery process by setting the com.arjuna.ats.jta.orphanSafetyInterval property to a value higher than the default value of 10000 milliseconds. A value of 40000 milliseconds is recommended. Note that this does not solve the issue. Instead it decreases the probability that it will occur and that the warning message will be shown in the log. For more information, see https://developer.jboss.org/thread/266729 11.2.11. About the 2-Phase Commit Protocol The two-phase commit (2PC) protocol refers to an algorithm to determine the outcome of a transaction. 2PC is driven by the Transaction Manager (TM) as a process of finishing XA transactions. Phase 1: Prepare In the first phase, the transaction participants notify the transaction coordinator whether they are able to commit the transaction or must roll back. Phase 2: Commit In the second phase, the transaction coordinator makes the decision about whether the overall transaction should commit or roll back. If any one of the participants cannot commit, the transaction must roll back. Otherwise, the transaction can commit. The coordinator directs the resources about what to do, and they notify the coordinator when they have done it. At that point, the transaction is finished. 11.2.12. About Transaction Timeouts In order to preserve atomicity and adhere to the ACID standard for transactions, some parts of a transaction can be long-running. Transaction participants need to lock an XA resource that is part of database table or message in a queue when they commit. The TM needs to wait to hear back from each transaction participant before it can direct them all whether to commit or roll back. Hardware or network failures can cause resources to be locked indefinitely. Transaction timeouts can be associated with transactions in order to control their lifecycle. If a timeout threshold passes before the transaction commits or rolls back, the timeout causes the transaction to be rolled back automatically. You can configure default timeout values for the entire transaction subsystem, or you can disable default timeout values and specify timeouts on a per-transaction basis. 11.2.13. About Distributed Transactions A distributed transaction is a transaction with participants on multiple JBoss EAP servers. JTS specification mandates that JTS transactions be able to be distributed across application servers from different vendors. 
The Jakarta Transactions specification does not define this, but JBoss EAP supports distributed Jakarta Transactions transactions among JBoss EAP servers. Note Transaction distribution among servers from different vendors is not supported. Note In other application server vendor documentation, you might find that the term distributed transaction means XA transaction. In the context of JBoss EAP documentation, a distributed transaction refers to transactions distributed among several JBoss EAP application servers. Transactions that consist of different resources, for example, database resources and Jakarta Messaging resources, are referred to as XA transactions in this document. For more information, see About JTS and About XA Datasources and XA Transactions. 11.2.14. About the ORB Portability API The Object Request Broker (ORB) is a process that sends and receives messages to transaction participants, coordinators, resources, and other services distributed across multiple application servers. An ORB uses a standardized Interface Description Language (IDL) to communicate and interpret messages. Common Object Request Broker Architecture (CORBA) is the IDL used by the ORB in JBoss EAP. The main type of service that uses an ORB is a system of distributed Jakarta Transactions, using the JTS specification. Other systems, especially legacy systems, can choose to use an ORB for communication rather than other mechanisms such as remote Jakarta Enterprise Beans or Jakarta Enterprise Web Services or Jakarta RESTful Web Services. The ORB Portability API provides mechanisms to interact with an ORB. This API provides methods for obtaining a reference to the ORB, as well as placing an application into a mode where it listens for incoming connections from an ORB. Some of the methods in the API are not supported by all ORBs. In those cases, an exception is thrown. The API consists of two different classes: com.arjuna.orbportability.orb com.arjuna.orbportability.oa See the JBoss EAP Javadocs bundle available on the Red Hat Customer Portal for specific details about the methods and properties included in the ORB Portability API. 11.3. Transaction Optimizations 11.3.1. Overview of Transaction Optimizations The Transaction Manager (TM) of JBoss EAP includes several optimizations that your application can take advantage of. Optimizations serve to enhance the 2-phase commit protocol in particular cases. Generally, the TM starts a global transaction, which passes through the 2-phase commit. But when you optimize these transactions, in certain cases, the TM does not need to proceed with a full 2-phase commit, which makes the process faster. Different optimizations used by the TM are described in detail below. About the LRCO Optimization for Single-phase Commit (1PC) About the Presumed-Abort Optimization About the Read-Only Optimization 11.3.2. About the LRCO Optimization for Single-phase Commit (1PC) Single-phase Commit (1PC) Although the 2-phase commit protocol (2PC) is more commonly encountered with transactions, some situations do not require, or cannot accommodate, both phases. In these cases, you can use the single-phase commit (1PC) protocol. The single-phase commit protocol is used when only one XA or non-XA resource is a part of the global transaction. The prepare phase generally locks the resource until the second phase is processed. Single-phase commit means that the prepare phase is skipped and only the commit is processed on the resource.
If not specified, the single-phase commit optimization is used automatically when the global transaction contains only one participant. Last Resource Commit Optimization (LRCO) In situations where a non-XA datasource participates in an XA transaction, an optimization known as the Last Resource Commit Optimization (LRCO) is employed. While this protocol allows for most transactions to complete normally, certain types of error can cause an inconsistent transaction outcome. Therefore, use this approach only as a last resort. The non-XA resource is processed at the end of the prepare phase, and an attempt is made to commit it. If the commit succeeds, the transaction log is written and the remaining resources go through the commit phase. If the last resource fails to commit, the transaction is rolled back. Where a single local TX datasource is used in a transaction, the LRCO is automatically applied to it. Previously, adding non-XA resources to an XA transaction was achieved via the LRCO method. However, there is a window of failure in LRCO. The procedure for adding non-XA resources to an XA transaction using the LRCO method is as follows: Prepare the XA transaction. Commit LRCO. Write the transaction log. Commit the XA transaction. If the procedure crashes between step 2 and step 3, this could lead to data inconsistency and you cannot commit the XA transaction. The data inconsistency occurs because the LRCO non-XA resource is committed but information about the preparation of the XA resources was not recorded. The recovery manager will roll back the resource after the server is back up. Commit Markable Resource (CMR) eliminates this restriction and allows a non-XA resource to be reliably enlisted in an XA transaction. Note CMR is a special case of the LRCO optimization that should only be used for datasources. It is not suitable for all non-XA resources. About the 2-Phase Commit Protocol 11.3.2.1. Commit Markable Resource Summary Configuring access to a resource manager using the Commit Markable Resource (CMR) interface ensures that a non-XA datasource can be reliably enlisted in an XA (2PC) transaction. It is an implementation of the LRCO algorithm, which makes the non-XA resource fully recoverable. To configure the CMR, you must: Create tables in a database. Enable the datasource to be connectable. Add a reference to the transactions subsystem. Create Tables in Database A transaction can contain only one CMR resource. You can create a table using SQL similar to the following example. The following are examples of the SQL syntax to create tables for various database management systems. Example: Sybase Create Table Syntax Example: Oracle Create Table Syntax Example: IBM Create Table Syntax Example: SQL Server Create Table Syntax Example: PostgreSQL Create Table Syntax Example: MariaDB Create Table Syntax Example: MySQL Create Table Syntax Enabling Datasource to be Connectable By default, the CMR feature is disabled for datasources. To enable it, you must create or modify the datasource configuration and ensure that the connectable attribute is set to true. The following is an example of the datasources section of a server XML configuration file: Note This feature is not applicable to XA datasources. You can also enable a resource manager as a CMR, using the management CLI, as follows: This command generates the following XML in the datasources section of the server configuration file.
<datasource jta="true" jndi-name="java:jboss/datasources/ConnectableDS" pool-name="ConnectableDS" enabled="true" use-java-context="true" connectable="true"> <connection-url>validConnectionURL</connection-url> <driver>mssql</driver> <validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter"/> </validation> </datasource> Note The datasource must have a valid driver defined. The example above uses mssql as the driver-name; however, the mssql driver does not exist. For details, see Example MySQL Datasource in the JBoss EAP Configuration Guide. Note Use the exception-sorter-class-name parameter in the datasource configuration. For details, see Example Datasource Configurations in the JBoss EAP Configuration Guide. Updating an Existing Resource to Use the New CMR Feature If you only need to update an existing datasource to use the CMR feature, then simply modify the connectable attribute: Add a Reference to the Transactions Subsystem The transactions subsystem identifies the datasources that are CMR capable through an entry in the transactions subsystem configuration section as shown below: <subsystem xmlns="urn:jboss:domain:transactions:5.0"> ... <commit-markable-resources> <commit-markable-resource jndi-name="java:jboss/datasources/ConnectableDS"> <xid-location name="xids" batch-size="100" immediate-cleanup="false"/> </commit-markable-resource> ... </commit-markable-resources> </subsystem> The same result can be achieved using the management CLI: Note You must restart the server after adding the CMR reference under the transactions subsystem. 11.3.3. About the Presumed-Abort Optimization If a transaction is going to roll back, it can record this information locally and notify all enlisted participants. This notification is only a courtesy, and has no effect on the transaction outcome. After all participants have been contacted, the information about the transaction can be removed. If a subsequent request for the status of the transaction occurs, there will be no information available. In this case, the requester assumes that the transaction has aborted and rolled back. This presumed-abort optimization means that no information about participants needs to be made persistent until the transaction has decided to commit, since any failure prior to this point will be assumed to be an abort of the transaction. 11.3.4. About the Read-Only Optimization When a participant is asked to prepare, it can indicate to the coordinator that it has not modified any data during the transaction. Such a participant does not need to be informed about the outcome of the transaction, since the fate of the participant has no effect on the transaction. This read-only participant can be omitted from the second phase of the commit protocol. 11.4. Transaction Outcomes 11.4.1. About Transaction Outcomes There are three possible outcomes for a transaction. Commit If every transaction participant can commit, the transaction coordinator directs them to do so. See About Transaction Commit for more information. Rollback If any transaction participant cannot commit, or if the transaction coordinator cannot direct participants to commit, the transaction is rolled back. See About Transaction Rollback for more information. Heuristic outcome If some transaction participants commit and others roll back, it is termed a heuristic outcome. Heuristic outcomes require human intervention. See About Heuristic Outcomes for more information. 11.4.2.
About Transaction Commit When a transaction participant commits, it makes its new state durable. The new state is created by the participant doing the work involved in the transaction. The most common example is when a transaction member writes records to a database. After a commit, information about the transaction is removed from the transaction coordinator, and the newly-written state is now the durable state. 11.4.3. About Transaction Rollback A transaction participant rolls back by restoring its state to reflect the state before the transaction began. After a rollback, the state is the same as if the transaction had never been started. 11.4.4. About Heuristic Outcomes A heuristic outcome, or non-atomic outcome, is a situation where the decisions of the participants in a transaction differ from that of the transaction manager. Heuristic outcomes can cause loss of integrity to the system, and usually require human intervention to resolve them. Do not write code which relies on them. Heuristic outcomes typically occur during the second phase of the 2-phase commit (2PC) protocol. In rare cases, this outcome might occur in a 1PC. They are often caused by failures to the underlying hardware or communications subsystems of the underlying servers. Heuristic outcomes are possible due to timeouts in various subsystems or resources even with transaction manager and full crash recovery. In any system that requires some form of distributed agreement, situations can arise where some parts of the system diverge in terms of the global outcome. There are four different types of heuristic outcomes: Heuristic rollback The commit operation was not able to commit the resources but all of the participants were able to be rolled back and so an atomic outcome was still achieved. Heuristic commit An attempted rollback operation failed because all of the participants unilaterally committed. This can happen if, for example, the coordinator is able to successfully prepare the transaction but then decides to roll it back because of a failure on its side, such as a failure to update its log. In the interim, the participants might decide to commit. Heuristic mixed Some participants committed and others rolled back. Heuristic hazard The disposition of some of the updates is unknown. For those that are known, they have either all been committed or all rolled back. About the 2-Phase Commit Protocol 11.4.5. JBoss Transactions Errors and Exceptions For details about exceptions thrown by methods of the UserTransaction class, see the UserTransaction API Javadoc. 11.5. Overview of the Transaction Lifecycle 11.5.1. Transaction Lifecycle See About Jakarta Transactions for more information on Jakarta Transactions. When a resource asks to participate in a transaction, a chain of events is set in motion. The Transaction Manager (TM) is a process that lives within the application server and manages transactions. Transaction participants are objects which participate in a transaction. Resources are datasources, Jakarta Messaging connection factories, or other Jakarta Connectors connections. The application starts a new transaction. To begin a transaction, the application obtains an instance of class UserTransaction from Java Naming and Directory Interface or, if it is a Jakarta Enterprise Beans, from an annotation. The UserTransaction interface includes methods for beginning, committing, and rolling back top-level transactions. Newly created transactions are automatically associated with their invoking thread. 
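The lifecycle steps described in this section can be sketched against the UserTransaction API as follows. This is a hedged illustration: the class and method names other than those on UserTransaction are invented for the example, it assumes bean-managed transactions with the container injecting the UserTransaction, and the setTransactionTimeout() call simply shows the per-transaction timeout option mentioned in About Transaction Timeouts.

import javax.annotation.Resource;
import javax.transaction.UserTransaction;

public class TransferService {

    @Resource
    private UserTransaction userTransaction;   // injected by the container (BMT)

    public void transfer() throws Exception {
        // Optional: per-transaction timeout in seconds, overriding the subsystem default.
        userTransaction.setTransactionTimeout(120);

        userTransaction.begin();                // 1. the application starts a new transaction
        try {
            doWorkOnEnlistedResources();        // 2. the application modifies its state
            userTransaction.commit();           // 3. the application decides to commit ...
        } catch (Exception e) {
            userTransaction.rollback();         //    ... or to roll back
            throw e;
        }
        // 4. once commit or rollback completes, the TM removes the transaction from its records
    }

    private void doWorkOnEnlistedResources() { /* datasource or messaging work goes here */ }
}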
Nested transactions are not supported in Jakarta Transactions, so all transactions are top-level transactions. A Jakarta Enterprise Bean starts a transaction when the UserTransaction.begin() method is called. The default behavior of this transaction could be affected by use of the TransactionAttribute annotation or the ejb.xml descriptor. Any resource that is used after that point is associated with the transaction. If more than one resource is enlisted, the transaction becomes an XA transaction, and participates in the two-phase commit protocol at commit time. Note By default, transactions are driven by application containers in Jakarta Enterprise Beans. This is called Container Managed Transaction (CMT). To make the transaction user-driven, change the Transaction Management to Bean Managed Transaction (BMT). In BMT, the UserTransaction object is available for the user to manage the transaction. The application modifies its state. In this step, the application performs its work and makes changes to its state, only on enlisted resources. The application decides to commit or roll back. When the application has finished changing its state, it decides whether to commit or roll back. It calls the appropriate method, either UserTransaction.commit() or UserTransaction.rollback(). For a CMT, this process is driven automatically, whereas for a BMT, the commit or rollback method of the UserTransaction has to be explicitly called. The TM removes the transaction from its records. After the commit or rollback completes, the TM cleans up its records and removes information about the transaction from the transaction log. Failure Recovery If a resource, transaction participant, or the application server crashes or becomes unavailable, the Transaction Manager handles recovery when the underlying failure is resolved and the resource is available again. This process happens automatically. For more information, see XA Recovery. 11.6. Transaction Subsystem Configuration The transactions subsystem allows you to configure transaction manager options such as statistics, timeout values, and transaction logging. You can also manage transactions and view transaction statistics. For more information, see Configuring Transactions in the JBoss EAP Configuration Guide. 11.7. Transactions Usage In Practice 11.7.1. Transactions Usage Overview The following procedures are useful when you need to use transactions in your application. Control Transactions Begin a Transaction Commit a Transaction Roll Back a Transaction Handle a Heuristic Outcome in a Transaction Handle Transaction Errors Transaction References 11.7.2. Control Transactions Introduction This list of procedures outlines the different ways to control transactions in your applications that use Jakarta Transactions APIs. Begin a Transaction Commit a Transaction Roll Back a Transaction 11.7.2.1. Begin a Transaction This procedure shows how to begin a new transaction. The API is the same whether you run the Transaction Manager (TM) configured with Jakarta Transactions or JTS. Get an instance of UserTransaction. You can get the instance using Java Naming and Directory Interface, injection, or a Jakarta Enterprise Beans context if the Jakarta Enterprise Bean uses bean-managed transactions by means of a @TransactionManagement(TransactionManagementType.BEAN) annotation. Get the instance using Java Naming and Directory Interface. new InitialContext().lookup("java:comp/UserTransaction") Get the instance using injection.
@Resource UserTransaction userTransaction; Get the instance using the Jakarta Enterprise Beans context. In a stateless/stateful bean: @Resource SessionContext ctx; ctx.getUserTransaction(); In a message-driven bean: @Resource MessageDrivenContext ctx; ctx.getUserTransaction() Call UserTransaction.begin() after you connect to your datasource. try { System.out.println("\nCreating connection to database: "+url); stmt = conn.createStatement(); // non-tx statement try { System.out.println("Starting top-level transaction."); userTransaction.begin(); stmtx = conn.createStatement(); // will be a tx-statement ... } } Result The transaction begins. All uses of your datasource are transactional until you commit or roll back the transaction. For a full example, see Jakarta Transactions Transaction Example. Note One of the benefits of Jakarta Enterprise Beans (either used with CMT or BMT) is that the container manages all the internals of the transactional processing; that is, you do not have to take care of whether the transaction is part of an XA transaction or of transaction distribution among JBoss EAP containers. 11.7.2.1.1. Nested Transactions Nested transactions allow an application to create a transaction that is embedded in an existing transaction. In this model, multiple subtransactions can be embedded recursively in a transaction. Subtransactions can be committed or rolled back without committing or rolling back the parent transaction. However, the results of a commit operation are contingent upon the commitment of all the transaction's ancestors. For implementation-specific information, see the Narayana Project Documentation. Nested transactions are available only when used with the JTS specification. Nested transactions are not a supported feature of JBoss EAP application server. In addition, many database vendors do not support nested transactions, so consult your database vendor before you add nested transactions to your application. 11.7.2.2. Commit a Transaction This procedure shows how to commit a transaction using the Jakarta Transactions API. Prerequisites You must begin a transaction before you can commit it. For information on how to begin a transaction, see Begin a Transaction. Call the commit() method on the UserTransaction. When you call the commit() method on the UserTransaction, the TM attempts to commit the transaction. @Inject private UserTransaction userTransaction; public void updateTable(String key, String value) { EntityManager entityManager = entityManagerFactory.createEntityManager(); try { userTransaction.begin(); /* Perform some data manipulation using entityManager */ ... // Commit the transaction userTransaction.commit(); } catch (Exception ex) { /* Log message or notify Web page */ ... try { userTransaction.rollback(); } catch (SystemException se) { throw new RuntimeException(se); } throw new RuntimeException(ex); } finally { entityManager.close(); } } If you use Container Managed Transactions (CMT), you do not need to manually commit. If you configure your bean to use Container Managed Transactions, the container will manage the transaction lifecycle for you based on annotations you configure in the code. @PersistenceContext private EntityManager em; @TransactionAttribute(TransactionAttributeType.REQUIRED) public void updateTable(String key, String value) { /* Perform some data manipulation using entityManager */ ... } Result Your datasource commits and your transaction ends, or an exception is thrown.
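For context, the CMT snippet above can be read as part of a session bean such as the following hedged sketch; the bean name and the TableEntry entity referenced in the query are illustrative and are not part of the original example.

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// CMT version of updateTable(): the container starts the transaction before the method
// runs and commits it when the method returns normally; an unhandled system exception
// marks the transaction for rollback instead.
@Stateless
public class TableUpdater {

    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void updateTable(String key, String value) {
        // TableEntry and its fields are hypothetical; any JPA work here joins the container's transaction.
        em.createQuery("UPDATE TableEntry t SET t.val = :val WHERE t.name = :name")
          .setParameter("val", value)
          .setParameter("name", key)
          .executeUpdate();
    }
}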
Note For a full example, see Jakarta Transactions Transaction Example. 11.7.2.3. Roll Back a Transaction This procedure shows how to roll back a transaction using the Jakarta Transactions API. Prerequisites You must begin a transaction before you can roll it back. For information on how to begin a transaction, see Begin a Transaction. Call the rollback() method on the UserTransaction. When you call the rollback() method on the UserTransaction, the TM attempts to roll back the transaction and return the data to its previous state. @Inject private UserTransaction userTransaction; public void updateTable(String key, String value) { EntityManager entityManager = entityManagerFactory.createEntityManager(); try { userTransaction.begin(); /* Perform some data manipulation using entityManager */ ... // Commit the transaction userTransaction.commit(); } catch (Exception ex) { /* Log message or notify Web page */ ... try { userTransaction.rollback(); } catch (SystemException se) { throw new RuntimeException(se); } throw new RuntimeException(ex); } finally { entityManager.close(); } } If you use Container Managed Transactions (CMT), you do not need to manually roll back the transaction. If you configure your bean to use Container Managed Transactions, the container will manage the transaction lifecycle for you based on annotations you configure in the code. Note Rollback for CMT occurs if a RuntimeException is thrown. You can also explicitly call the setRollbackOnly method to trigger the rollback. Or, use the @ApplicationException(rollback=true) annotation on an application exception to cause a rollback. Result Your transaction is rolled back by the TM. Note For a full example, see Jakarta Transactions Transaction Example. 11.7.3. Handle a Heuristic Outcome in a Transaction Heuristic transaction outcomes are uncommon and usually have exceptional causes. The word heuristic means "by hand", and that is the way that these outcomes usually have to be handled. See About Heuristic Outcomes for more information about heuristic transaction outcomes. This procedure shows how to handle a heuristic outcome of a transaction using the Jakarta Transactions API. The cause of a heuristic outcome in a transaction is that a resource manager promised it could commit or roll back, and then failed to fulfill the promise. This could be due to a problem with a third-party component, the integration layer between the third-party component and JBoss EAP, or JBoss EAP itself. By far, the two most common causes of heuristic errors are transient failures in the environment and coding errors dealing with resource managers. Usually, if there is a transient failure in your environment, you will know about it before you find out about the heuristic error. This could be due to a network outage, hardware failure, database failure, power outage, or a host of other things. If you come across a heuristic outcome in a test environment during stress testing, it implies weaknesses in your test environment. Warning JBoss EAP automatically recovers transactions that were in a non-heuristic state at the time of failure, but it does not attempt to recover the heuristic transactions. If you have no obvious failure in your environment, or if the heuristic outcome is easily reproducible, it is probably due to a coding error. You must contact the third-party vendors to find out if a solution is available. If you suspect the problem is in the transaction manager of JBoss EAP itself, you must raise a support ticket.
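In application code, a heuristic outcome typically surfaces as one of the heuristic exceptions declared by UserTransaction.commit(). The following minimal sketch (the class and method names are illustrative) shows where such an outcome can be detected and logged before the manual recovery steps described next are taken.

import javax.transaction.HeuristicMixedException;
import javax.transaction.HeuristicRollbackException;
import javax.transaction.RollbackException;
import javax.transaction.UserTransaction;

// UserTransaction.commit() declares the heuristic exceptions, so application code can
// record enough detail for an administrator to reconcile the affected resources manually.
public class HeuristicAwareCaller {

    public void runWork(UserTransaction userTransaction) throws Exception {
        userTransaction.begin();
        try {
            // ... transactional work on enlisted resources ...
            userTransaction.commit();
        } catch (HeuristicMixedException | HeuristicRollbackException heuristic) {
            // Non-atomic outcome: some participants committed, others rolled back or are unknown.
            throw heuristic;
        } catch (RollbackException rolledBack) {
            // Atomic outcome: the transaction was rolled back instead of committed.
            throw rolledBack;
        }
    }
}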
You can attempt to recover the transaction manually using the management CLI. For more information, see the Recovering a Transaction Participant section of Managing Transactions on JBoss EAP . The process of resolving the transaction outcome manually is dependent on the exact circumstance of the failure. Perform the following steps, as applicable to your environment: Identify which resource managers were involved. Examine the state of the transaction manager and the resource managers. Manually force log cleanup and data reconciliation in one or more of the involved components. In a test environment, or if you do not care about the integrity of the data, deleting the transaction logs and restarting JBoss EAP gets rid of the heuristic outcome. By default, the transaction logs are located in the EAP_HOME /standalone/data/tx-object-store/ directory for a standalone server, or the EAP_HOME /domain/servers/ SERVER_NAME /data/tx-object-store/ directory in a managed domain. In the case of a managed domain, SERVER_NAME refers to the name of the individual server participating in a server group. Note The location of the transaction log also depends on the object store in use and the values set for the object-store-relative-to and object-store-path parameters. For file system logs, such as a standard shadow and Apache ActiveMQ Artemis logs, the default directory location is used, but when using a JDBC object store, the transaction logs are stored in a database. 11.7.4. Jakarta Transactions Transaction Error Handling 11.7.4.1. Handle Transaction Errors Transaction errors are challenging to solve because they are often dependent on timing. Here are some common errors and ideas for troubleshooting them. Note These guidelines do not apply to heuristic errors. If you experience heuristic errors, refer to Handle a Heuristic Outcome in a Transaction and contact Red Hat Global Support Services for assistance. The transaction timed out but the business logic thread did not notice This type of error often manifests itself when Hibernate is unable to obtain a database connection for lazy loading. If it happens frequently, you can lengthen the timeout value. See the JBoss EAP Configuration Guide for information on configuring the transaction manager . If that is not feasible, you might be able to tune your external environment to perform more quickly, or restructure your code to be more efficient. Contact Red Hat Global Support Services if you still have trouble with timeouts. The transaction is already running on a thread, or you receive a NotSupportedException exception The NotSupportedException exception usually indicates that you attempted to nest a Jakarta Transactions transaction, and this is not supported. If you were not attempting to nest a transaction, it is likely that another transaction was started in a thread pool task, but finished the task without suspending or ending the transaction. Applications typically use UserTransaction , which handles this automatically. If so, there might be a problem with a framework. If your code does use TransactionManager or Transaction methods directly, be aware of the following behavior when committing or rolling back a transaction. If your code uses TransactionManager methods to control your transactions, committing or rolling back a transaction disassociates the transaction from the current thread. 
However, if your code uses Transaction methods, the transaction might not be associated with the running thread, and you need to disassociate it from its threads manually, before returning it to the thread pool. You are unable to enlist a second local resource This error happens if you try to enlist a second non-XA resource into a transaction. If you need multiple resources in a transaction, they must be XA. 11.8. Transaction References 11.8.1. Transaction Example for Jakarta Transactions This example illustrates how to begin, commit, and roll back a Jakarta Transactions transaction. You need to adjust the connection and datasource parameters to suit your environment, and set up two test tables in your database. public class JDBCExample { public static void main (String[] args) { Context ctx = new InitialContext(); // Change these two lines to suit your environment. DataSource ds = (DataSource)ctx.lookup("jdbc/ExampleDS"); Connection conn = ds.getConnection("testuser", "testpwd"); Statement stmt = null; // Non-transactional statement Statement stmtx = null; // Transactional statement Properties dbProperties = new Properties(); // Get a UserTransaction UserTransaction txn = new InitialContext().lookup("java:comp/UserTransaction"); try { stmt = conn.createStatement(); // non-tx statement // Check the database connection. try { stmt.executeUpdate("DROP TABLE test_table"); stmt.executeUpdate("DROP TABLE test_table2"); } catch (Exception e) { throw new RuntimeException(e); // assume not in database. } try { stmt.executeUpdate("CREATE TABLE test_table (a INTEGER,b INTEGER)"); stmt.executeUpdate("CREATE TABLE test_table2 (a INTEGER,b INTEGER)"); } catch (Exception e) { throw new RuntimeException(e); } try { System.out.println("Starting top-level transaction."); txn.begin(); stmtx = conn.createStatement(); // will be a tx-statement // First, we try to roll back changes System.out.println("\nAdding entries to table 1."); stmtx.executeUpdate("INSERT INTO test_table (a, b) VALUES (1,2)"); ResultSet res1 = null; System.out.println("\nInspecting table 1."); res1 = stmtx.executeQuery("SELECT * FROM test_table"); while (res1.next()) { System.out.println("Column 1: "+res1.getInt(1)); System.out.println("Column 2: "+res1.getInt(2)); } System.out.println("\nAdding entries to table 2."); stmtx.executeUpdate("INSERT INTO test_table2 (a, b) VALUES (3,4)"); res1 = stmtx.executeQuery("SELECT * FROM test_table2"); System.out.println("\nInspecting table 2."); while (res1.next()) { System.out.println("Column 1: "+res1.getInt(1)); System.out.println("Column 2: "+res1.getInt(2)); } System.out.print("\nNow attempting to rollback changes."); txn.rollback(); // Next, we try to commit changes txn.begin(); stmtx = conn.createStatement(); System.out.println("\nAdding entries to table 1."); stmtx.executeUpdate("INSERT INTO test_table (a, b) VALUES (1,2)"); ResultSet res2 = null; System.out.println("\nNow checking state of table 1."); res2 = stmtx.executeQuery("SELECT * FROM test_table"); while (res2.next()) { System.out.println("Column 1: "+res2.getInt(1)); System.out.println("Column 2: "+res2.getInt(2)); } System.out.println("\nNow checking state of table 2."); stmtx = conn.createStatement(); res2 = stmtx.executeQuery("SELECT * FROM test_table2"); while (res2.next()) { System.out.println("Column 1: "+res2.getInt(1)); System.out.println("Column 2: "+res2.getInt(2)); } txn.commit(); } catch (Exception ex) { throw new RuntimeException(ex); } } catch (Exception sysEx) { sysEx.printStackTrace(); System.exit(0); } } } 11.8.2.
Transaction API Documentation The transaction Jakarta Transactions API documentation is available as Javadoc at the following location: UserTransaction - https://jakarta.ee/specifications/platform/8/apidocs/javax/transaction/UserTransaction.html If you use Red Hat CodeReady Studio to develop your applications, the API documentation is included in the Help menu. | [
"/subsystem=xts:write-attribute(name=async-registration, value=true)",
"ARJUNA016037: Could not find new XAResource to use for recovering non-serializable XAResource XAResourceRecord",
"ARJUNA016027: Local XARecoveryModule.xaRecovery got XA exception XAException.XAER_NOTA: javax.transaction.xa.XAException",
"SELECT xid,actionuid FROM _tableName_ WHERE transactionManagerID IN (String[]) DELETE FROM _tableName_ WHERE xid IN (byte[[]]) INSERT INTO _tableName_ (xid, transactionManagerID, actionuid) VALUES (byte[],String,byte[])",
"CREATE TABLE xids (xid varbinary(144), transactionManagerID varchar(64), actionuid varbinary(28))",
"CREATE TABLE xids (xid RAW(144), transactionManagerID varchar(64), actionuid RAW(28)) CREATE UNIQUE INDEX index_xid ON xids (xid)",
"CREATE TABLE xids (xid VARCHAR(255) for bit data not null, transactionManagerID varchar(64), actionuid VARCHAR(255) for bit data not null) CREATE UNIQUE INDEX index_xid ON xids (xid)",
"CREATE TABLE xids (xid varbinary(144), transactionManagerID varchar(64), actionuid varbinary(28)) CREATE UNIQUE INDEX index_xid ON xids (xid)",
"CREATE TABLE xids (xid bytea, transactionManagerID varchar(64), actionuid bytea) CREATE UNIQUE INDEX index_xid ON xids (xid)",
"CREATE TABLE xids (xid BINARY(144), transactionManagerID varchar(64), actionuid BINARY(28)) CREATE UNIQUE INDEX index_xid ON xids (xid)",
"CREATE TABLE xids (xid VARCHAR(255), transactionManagerID varchar(64), actionuid VARCHAR(255)) CREATE UNIQUE INDEX index_xid ON xids (xid)",
"<datasource enabled=\"true\" jndi-name=\"java:jboss/datasources/ConnectableDS\" pool-name=\"ConnectableDS\" jta=\"true\" use-java-context=\"true\" connectable=\"true\"/>",
"/subsystem=datasources/data-source=ConnectableDS:add(enabled=\"true\", jndi-name=\"java:jboss/datasources/ConnectableDS\", jta=\"true\", use-java-context=\"true\", connectable=\"true\", connection-url=\"validConnectionURL\", exception-sorter-class-name=\"org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter\", driver-name=\"mssql\")",
"<datasource jta=\"true\" jndi-name=\"java:jboss/datasources/ConnectableDS\" pool-name=\"ConnectableDS\" enabled=\"true\" use-java-context=\"true\" connectable=\"true\"> <connection-url>validConnectionURL</connection-url> <driver>mssql</driver> <validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter\"/> </validation> </datasource>",
"/subsystem=datasources/data-source=ConnectableDS:write-attribute(name=connectable,value=true)",
"<subsystem xmlns=\"urn:jboss:domain:transactions:5.0\"> <commit-markable-resources> <commit-markable-resource jndi-name=\"java:jboss/datasources/ConnectableDS\"> <xid-location name=\"xids\" batch-size=\"100\" immediate-cleanup=\"false\"/> </commit-markable-resource> </commit-markable-resources> </subsystem>",
"/subsystem=transactions/commit-markable-resource=java\\:jboss\\/datasources\\/ConnectableDS/:add(batch-size=100,immediate-cleanup=false,name=xids)",
"new InitialContext().lookup(\"java:comp/UserTransaction\")",
"@Resource UserTransaction userTransaction;",
"@Resource SessionContext ctx; ctx.getUserTransaction();",
"@Resource MessageDrivenContext ctx; ctx.getUserTransaction()",
"try { System.out.println(\"\\nCreating connection to database: \"+url); stmt = conn.createStatement(); // non-tx statement try { System.out.println(\"Starting top-level transaction.\"); userTransaction.begin(); stmtx = conn.createStatement(); // will be a tx-statement } }",
"@Inject private UserTransaction userTransaction; public void updateTable(String key, String value) { EntityManager entityManager = entityManagerFactory.createEntityManager(); try { userTransaction.begin(); <!-- Perform some data manipulation using entityManager --> // Commit the transaction userTransaction.commit(); } catch (Exception ex) { <!-- Log message or notify Web page --> try { userTransaction.rollback(); } catch (SystemException se) { throw new RuntimeException(se); } throw new RuntimeException(ex); } finally { entityManager.close(); } }",
"@PersistenceContext private EntityManager em; @TransactionAttribute(TransactionAttributeType.REQUIRED) public void updateTable(String key, String value) <!-- Perform some data manipulation using entityManager --> }",
"@Inject private UserTransaction userTransaction; public void updateTable(String key, String value) EntityManager entityManager = entityManagerFactory.createEntityManager(); try { userTransaction.begin(): <!-- Perform some data manipulation using entityManager --> // Commit the transaction userTransaction.commit(); } catch (Exception ex) { <!-- Log message or notify Web page --> try { userTransaction.rollback(); } catch (SystemException se) { throw new RuntimeException(se); } throw new RuntimeException(e); } finally { entityManager.close(); } }",
"public class JDBCExample { public static void main (String[] args) { Context ctx = new InitialContext(); // Change these two lines to suit your environment. DataSource ds = (DataSource)ctx.lookup(\"jdbc/ExampleDS\"); Connection conn = ds.getConnection(\"testuser\", \"testpwd\"); Statement stmt = null; // Non-transactional statement Statement stmtx = null; // Transactional statement Properties dbProperties = new Properties(); // Get a UserTransaction UserTransaction txn = new InitialContext().lookup(\"java:comp/UserTransaction\"); try { stmt = conn.createStatement(); // non-tx statement // Check the database connection. try { stmt.executeUpdate(\"DROP TABLE test_table\"); stmt.executeUpdate(\"DROP TABLE test_table2\"); } catch (Exception e) { throw new RuntimeException(e); // assume not in database. } try { stmt.executeUpdate(\"CREATE TABLE test_table (a INTEGER,b INTEGER)\"); stmt.executeUpdate(\"CREATE TABLE test_table2 (a INTEGER,b INTEGER)\"); } catch (Exception e) { throw new RuntimeException(e); } try { System.out.println(\"Starting top-level transaction.\"); txn.begin(); stmtx = conn.createStatement(); // will be a tx-statement // First, we try to roll back changes System.out.println(\"\\nAdding entries to table 1.\"); stmtx.executeUpdate(\"INSERT INTO test_table (a, b) VALUES (1,2)\"); ResultSet res1 = null; System.out.println(\"\\nInspecting table 1.\"); res1 = stmtx.executeQuery(\"SELECT * FROM test_table\"); while (res1.next()) { System.out.println(\"Column 1: \"+res1.getInt(1)); System.out.println(\"Column 2: \"+res1.getInt(2)); } System.out.println(\"\\nAdding entries to table 2.\"); stmtx.executeUpdate(\"INSERT INTO test_table2 (a, b) VALUES (3,4)\"); res1 = stmtx.executeQuery(\"SELECT * FROM test_table2\"); System.out.println(\"\\nInspecting table 2.\"); while (res1.next()) { System.out.println(\"Column 1: \"+res1.getInt(1)); System.out.println(\"Column 2: \"+res1.getInt(2)); } System.out.print(\"\\nNow attempting to rollback changes.\"); txn.rollback(); // Next, we try to commit changes txn.begin(); stmtx = conn.createStatement(); System.out.println(\"\\nAdding entries to table 1.\"); stmtx.executeUpdate(\"INSERT INTO test_table (a, b) VALUES (1,2)\"); ResultSet res2 = null; System.out.println(\"\\nNow checking state of table 1.\"); res2 = stmtx.executeQuery(\"SELECT * FROM test_table\"); while (res2.next()) { System.out.println(\"Column 1: \"+res2.getInt(1)); System.out.println(\"Column 2: \"+res2.getInt(2)); } System.out.println(\"\\nNow checking state of table 2.\"); stmtx = conn.createStatement(); res2 = stmtx.executeQuery(\"SELECT * FROM test_table2\"); while (res2.next()) { System.out.println(\"Column 1: \"+res2.getInt(1)); System.out.println(\"Column 2: \"+res2.getInt(2)); } txn.commit(); } catch (Exception ex) { throw new RuntimeException(ex); } } catch (Exception sysEx) { sysEx.printStackTrace(); System.exit(0); } } }"
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/java_transaction_api
Chapter 4. Installing a cluster on VMC with network customizations | Chapter 4. Installing a cluster on VMC with network customizations In OpenShift Container Platform version 4.13, you can install a cluster on your VMware vSphere instance using installer-provisioned infrastructure with customized network configuration options by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. By customizing your OpenShift Container Platform network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 4.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records. A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address. A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. 
These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any another Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 4.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 4.2. vSphere prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned block registry storage . For more information on persistent storage, see Understanding persistent storage . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 4.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. 
Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.4. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 4.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Table 4.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 and later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. CPU micro-architecture x86-64-v2 or higher OpenShift 4.13 and later are based on RHEL 9.2 host operating system which raised the microarchitecture requirements to x86-64-v2. See the RHEL Microarchitecture requirements documentation . You can verify compatibility by following the procedures outlined in this KCS article . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Additional resources For more information about CSI automatic migration, see "Overview" in VMware vSphere CSI Driver Operator . 4.5. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 4.3. Ports used for all-machine to all-machine communications Protocol Port Description VRRP N/A Required for keepalived ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 
10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 4.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 4.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 4.6. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 4.7. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 4.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 4.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
"vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 4.3. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. Using Storage vMotion can cause issues and is not supported. Using VMware compute vMotion to migrate the workloads for both OpenShift Container Platform compute machines and control plane machines is generally supported, where generally implies that you meet all VMware best practices for vMotion. 
To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using VMware vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. 
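For orientation only, the following sketch shows how these two static addresses are typically supplied in the install-config.yaml file through the platform.vsphere.apiVIPs and platform.vsphere.ingressVIPs fields, which are described later in this chapter; the addresses shown are placeholders, not values taken from this document. platform: vsphere: apiVIPs: - 192.168.100.10 # placeholder: static API VIP, outside the DHCP range ingressVIPs: - 192.168.100.11 # placeholder: static Ingress VIP, outside the DHCP range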
DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 4.6. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 4.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Important If you attempt to run the installation program on macOS, a known issue related to the golang compiler causes the installation of the OpenShift Container Platform cluster to fail. For more information about this issue, see the section named "Known Issues" in the OpenShift Container Platform 4.13 release notes document. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 4.10. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. 
For example, on a Fedora operating system, run the following command: # update-ca-trust extract 4.11. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. A cluster that was upgraded from a previous release defaults to using the in-tree vSphere driver, so you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to their respective datacenters and clusters. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters 4.12. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the data center in your vCenter instance to connect to. Note After you create the installation configuration file, you can modify the file to create a multiple vSphere datacenters environment. This means that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. For more information about creating this environment, see the section named VMware vSphere region and zone enablement . Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 4.12.1. 
Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 4.12.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 4.7. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 4.12.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 4.8. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . 
If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 4.12.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 4.9. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. 
Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. 
Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 4.12.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 4.10. Additional VMware vSphere cluster parameters Parameter Description Values Describes your account on the cloud platform that hosts your cluster. You can use the parameter to customize the platform. If you provide additional configuration settings for compute and control plane machines in the machine pool, the parameter is not required. You can only specify one vCenter server for your OpenShift Container Platform cluster. A dictionary of vSphere configuration objects Virtual IP (VIP) addresses that you configured for control plane API access. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Optional: The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . If you define multiple failure domains for your cluster, you must attach the tag to each vCenter datacenter. To define a region, use a tag from the openshift-region tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as datacenter , for the parameter. String Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. You must apply the server role to the vSphere vCenter server location. String If you define multiple failure domains for your cluster, you must attach a tag to each vCenter cluster. To define a zone, use a tag from the openshift-zone tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as cluster , for the parameter. String Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the vcenters field. 
String Specifies the path to a vSphere datastore that stores virtual machines files for a failure domain. You must apply the datastore role to the vSphere vCenter datastore location. String Optional: The absolute path of an existing folder where the user creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. String Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String Virtual IP (VIP) addresses that you configured for cluster Ingress. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Configures the connection details so that services can communicate with a vCenter server. Currently, only a single vCenter server is supported. An array of vCenter configuration objects. Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the failureDomains field. String The password associated with the vSphere user. String The port number used to communicate with the vCenter server. Integer The fully qualified host name (FQHN) or IP address of the vCenter server. String The username associated with the vSphere user. String 4.12.1.5. Deprecated VMware vSphere configuration parameters In OpenShift Container Platform 4.13, the following vSphere configuration parameters are deprecated. You can continue to use these parameters, but the installation program does not automatically specify these parameters in the install-config.yaml file. The following table lists each deprecated vSphere configuration parameter. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 4.11. Deprecated VMware vSphere cluster parameters Parameter Description Values The virtual IP (VIP) address that you configured for control plane API access. Note In OpenShift Container Platform 4.12 and later, the apiVIP configuration setting is deprecated. Instead, use a List format to enter a value in the apiVIPs configuration setting. An IP address, for example 128.0.0.1 . The vCenter cluster to install the OpenShift Container Platform cluster in. String Defines the datacenter where OpenShift Container Platform virtual machines (VMs) operate. String The name of the default datastore to use for provisioning volumes. String Optional: The absolute path of an existing folder where the installation program creates the virtual machines. 
If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . Virtual IP (VIP) addresses that you configured for cluster Ingress. Note In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a List format to enter a value in the ingressVIPs configuration setting. An IP address, for example 128.0.0.1 . The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String The password for the vCenter user name. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String The fully-qualified hostname or IP address of a vCenter server. String 4.12.1.6. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 4.12. Optional VMware vSphere machine pool parameters Parameter Description Values clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . osDisk.diskSizeGB The size of the disk in gigabytes. Integer cpus The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer memoryMB The size of a virtual machine's memory in megabytes. Integer 4.12.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 5 serviceNetwork: - 172.30.0.0/16 platform: vsphere: 6 apiVIPs: - 10.0.0.1 failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<datacenter>/host/<cluster>" datacenter: <datacenter> datastore: "/<datacenter>/datastore/<datastore>" 8 networks: - <VM_Network_name> resourcePool: "/<datacenter>/host/<cluster>/Resources/<resourcePool>" 9 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 10 fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 The cluster name that you specified in your DNS records. 6 Optional: Provides additional configuration for the machine pool parameters for the compute and control plane machines. 7 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 8 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". 9 Optional: Provides an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 10 The vSphere disk provisioning method. 5 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . Note In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 4.12.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. 
Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.12.4. 
Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. The default install-config.yaml file configuration from the previous release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter cluster object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.
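If you define more than one failure domain, as in the multiple-datacenter sample that follows, repeat the tag creation and attachment steps for each additional datacenter and vSphere cluster before you initialize the deployment. The following commands are a minimal sketch that mirrors the commands above; <region_tag_2>, <zone_tag_2>, <datacenter_2>, and <cluster_2> are placeholder names for the second failure domain, not values defined elsewhere in this document:
USD govc tags.create -c <region_tag_category> <region_tag_2>
USD govc tags.create -c <zone_tag_category> <zone_tag_2>
USD govc tags.attach -c <region_tag_category> <region_tag_2> /<datacenter_2>
USD govc tags.attach -c <zone_tag_category> <zone_tag_2> /<datacenter_2>/host/<cluster_2>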
Sample install-config.yaml file with multiple datacenters defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: "/<datacenter1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<datacenter1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" --- 4.13. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 4.14. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. 
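Before you add any network customization, you can optionally confirm that the manifest files were generated. This check is a sketch that assumes the default directory layout created by the installation program:
USD ls <installation_directory>/manifests/
The directory contains the generated cluster manifests; the stub manifest that you create in the next step is placed alongside them.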
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 4.15. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 4.15.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 4.13. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. 
Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 4.14. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 4.15. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 4.16. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. 
If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 4.17. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 4.18. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . 
This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 4.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 4.16. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. When you have configured your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host that is co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.17. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive.
Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 4.18. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.19. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 4.19.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 4.19.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.19.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. 
Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 4.19.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC.
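A minimal sketch of what the edited configuration might look like if you reference the PVC from the previous step explicitly instead of leaving the claim field blank; the claim value is the PVC name, which matches the name in the pvc.yaml example above:
storage:
  pvc:
    claim: image-registry-storage # PersistentVolumeClaim in the openshift-image-registry namespace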
For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 4.20. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 4.21. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 4.22. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, which runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable.
OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... 
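After you update the configuration, validate it and reload HAProxy so that the new frontends take effect. The following commands are a sketch that assumes a systemd-managed HAProxy instance with the default configuration path; adjust the path and service name for your environment:
USD haproxy -c -f /etc/haproxy/haproxy.cfg
USD systemctl reload haproxy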
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. 
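One way to confirm propagation before you continue, assuming the dig utility is available on your workstation and using the record names from the examples above:
USD dig +short api.<cluster_name>.<base_domain>
USD dig +short apps.<cluster_name>.<base_domain>
Each command returns the front-end IP address of the external load balancer once the corresponding record has propagated.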
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cache HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 4.23. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"platform: vsphere:",
"platform: vsphere: apiVIPs:",
"platform: vsphere: diskType:",
"platform: vsphere: failureDomains: region:",
"platform: vsphere: failureDomains: server:",
"platform: vsphere: failureDomains: zone:",
"platform: vsphere: failureDomains: topology: datacenter:",
"platform: vsphere: failureDomains: topology: datastore:",
"platform: vsphere: failureDomains: topology: folder:",
"platform: vsphere: failureDomains: topology: networks:",
"platform: vsphere: failureDomains: topology: resourcePool:",
"platform: vsphere: ingressVIPs:",
"platform: vsphere: vcenters:",
"platform: vsphere: vcenters: datacenters:",
"platform: vsphere: vcenters: password:",
"platform: vsphere: vcenters: port:",
"platform: vsphere: vcenters: server:",
"platform: vsphere: vcenters: user:",
"platform: vsphere: apiVIP:",
"platform: vsphere: cluster:",
"platform: vsphere: datacenter:",
"platform: vsphere: defaultDatastore:",
"platform: vsphere: folder:",
"platform: vsphere: ingressVIP:",
"platform: vsphere: network:",
"platform: vsphere: password:",
"platform: vsphere: resourcePool:",
"platform: vsphere: username:",
"platform: vsphere: vCenter:",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 5 serviceNetwork: - 172.30.0.0/16 platform: vsphere: 6 apiVIPs: - 10.0.0.1 failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> datastore: \"/<datacenter>/datastore/<datastore>\" 8 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 9 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 10 fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_vmc/installing-vmc-network-customizations |
Chapter 4. Migrating to Data Grid 8 APIs | Chapter 4. Migrating to Data Grid 8 APIs Find changes to Data Grid APIs that affect migration to Data Grid 8. API deprecations and removals In addition to details in this section, you should also review API deprecations and removals. See Data Grid Deprecated Features and Functionality (Red Hat Knowledgebase). 4.1. REST API Data Grid 7.x used REST API v1 which is replaced with REST API v2 in Data Grid 8. The default context path for REST API v2 is <server_hostname>:11222/rest/v2/ . You must update any clients or scripts to use REST API v2. The performAsync header was also removed from the REST endpoint. Clients that perform async operations with the REST endpoint should manage the request and response on their side to avoid blocking. REST operations PUT , POST and DELETE methods now return status 204 (No content) instead of 200 if the request does not return resources. Additional resources Data Grid REST API 4.1.1. REST API changes in 8.3 Data Grid 8.3 includes the following changes to the REST API: Re-indexing caches The mass-index operation to re-index Data Grid caches is now deprecated. Update your clients to use reindex instead, as in the following example: Rolling upgrade operations The following operation is now deprecated: Use the source-connection operation instead: 4.2. Query API Data Grid 8 brings an updated Query API that is easier to use and has a lighter design. You get more efficient query performance with better results when searching across values in distributed caches, in comparison with Data Grid 7.x. Note Because the Data Grid 8 Query API has gone through considerable refactoring, there are several features and functional resources that are now deprecated. This topic focuses on changes that you need to make to your configuration when migrating from a version. Those changes should include planning to remove all deprecated interfaces, methods, or other configuration. See the Data Grid Deprecations and Removals (Red Hat Knowledgebase) for the complete list of deprecated features and functionality. Indexing Data Grid caches The Data Grid Lucene Directory, the InfinispanIndexManager and AffinityIndexManager index managers, and the Infinispan Directory provider for Hibernate Search are deprecated in 8.0 and removed in 8.1. The auto-config attribute is deprecated in 8.1 and planned for removal. The index() method that configures the index mode configuration is deprecated. When you enable indexing in your configuration, Data Grid automatically chooses the best way to manage indexing. Important Several indexing configuration values are no longer supported and result in fatal configuration errors if you include them. You should make the following changes to your configuration: Change .indexing().index(Index.NONE) to indexing().enabled(false) Change all other enum values as follows: indexing().enabled(true) Declaratively, you do not need to specify enabled="true" if your configuration contains other indexing configuration elements. However, you must call the enabled() method if you programmatically configure indexing. Likewise Data Grid configuration in JSON format must explicitly enable indexing, for example: "indexing": { "enabled": "true" ... }, Indexed types You must declare all indexed types in the indexing configuration or Data Grid logs warning messages when undeclared types are used with indexed caches. This requirement applies to both Java classes and Protobuf types. 
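If you define caches in JSON format, you can declare the indexed entity types in the same indexing block. The following snippet is a minimal sketch rather than an excerpt from the product configuration reference: it assumes that the JSON schema mirrors the XML element names ( indexed-entities ) and it reuses the example classes from the declarative configuration shown below.
"indexing": { "enabled": "true", "indexed-entities": [ "com.acme.query.test.Car", "com.acme.query.test.Truck" ] }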
Enabling indexing in Data Grid 8 Declaratively <distributed-cache name="my-cache"> <indexing> <indexed-entities> <indexed-entity>com.acme.query.test.Car</indexed-entity> <indexed-entity>com.acme.query.test.Truck</indexed-entity> </indexed-entities> </indexing> </distributed-cache> Programmatically import org.infinispan.configuration.cache.*; ConfigurationBuilder config=new ConfigurationBuilder(); config.indexing().enable().addIndexedEntity(Car.class).addIndexedEntity(Truck.class); Querying values in caches The org.infinispan.query.SearchManager interface is deprecated in Data Grid 8 and no longer supports Lucene and Hibernate Search native objects. Removed methods .getQuery() methods that take Lucene Queries. Use the alternative methods that take Ickle queries from the org.infinispan.query.Search entry point instead. Likewise it is no longer possible to specify multiple target entities classes when calling .getQuery() . The Ickle query string provides entities instead. .buildQueryBuilderForClass() that builds Hibernate Search queries directly. Use Ickle queries instead. The org.infinispan.query.CacheQuery interface is also deprecated. You should obtain the org.infinispan.query.dsl.Query interface from the Search.getQueryFactory() method instead. Note that instances of org.infinispan.query.dsl.Query no longer cache query results and allow queries to be re-executed when calling methods such as list() . Entity mappings You must now annotate fields that require sorting with @SortableField in all cases. Additional resources Data Grid Query API Data Grid Deprecations and Removals 4.2.1. Query API changes in 8.2 Data Grid upgrades Hibernate and Apache Lucene libraries to improve performance and functionality for the Query API. As part of this upgrade, Data Grid introduces new indexing capabilities and removes several Hibernate and Lucene annotations. Query statistics Data Grid 8.2 exposes statistics for queries and indexes only if you enable statistics declaratively in the cache configuration as follows: <replicated-cache name="myReplicatedCache" statistics="true"> <!-- Cache configuration goes here. --> </replicated-cache> Enabling statistics for queries and indexes through JMX is no longer possible. Indexing Data Grid caches Declaring indexed types Data Grid 8.1 allowed undeclared types in the indexing configuration. As of Data Grid 8.2, you must declare all indexed types in the configuration. This requirement applies to both Java classes and Protobuf types. See the 8.1 migration details for more information on declaring indexed types. Index manager Data Grid 8.2 uses near-real-time as the default index manager and no longer requires configuration. Data Grid 8.1: <indexing> <property name="default.indexmanager">near-real-time</property> </indexing> Data Grid 8.2: <indexing enabled="true"/> Index reader and writer Data Grid 8.2 introduces an index reader and an index writer, both of which are internal components for creating indexes. To adapt your configuration, you should: Remove indexing configuration that uses the property element or .addProperty() method. Configure indexing behavior in one of the following ways: Declaratively: Add the <index-reader> and <index-writer> elements. Programmatically: Add the builder.indexing().reader() and builder.indexing().writer() methods. Reader refresh Use the refresh-interval attribute added in 8.2 to configure the refresh period for the index reader. 
Data Grid 8.1: <indexing> <property name="default.reader.async_refresh_period_ms">1000</property> </indexing> Data Grid 8.2: <indexing> <index-reader refresh-interval="1000"/> </indexing> Writer commit interval Use the commit-interval attribute added in 8.2 to configure the interval at which the index writer commits to index storage. In Data Grid 8.2 indexing is asynchronous by default and the default.worker.execution property is no longer used. Data Grid 8.1: <indexing> <property name="default.worker.execution">async</property> <property name="default.index_flush_interval">500</property> </indexing> Data Grid 8.2: <indexing> <index-writer commit-interval="500"/> </indexing> Lucene index tuning properties Data Grid 8.2 adds a ram-buffer-size attribute and an index-merge element with factor and max-size attributes that replace properties for tuning indexes. Data Grid 8.1: <indexing> <property name="default.indexwriter.merge_factor">30</property> <property name="default.indexwriter.merge_max_size">1024</property> <property name="default.indexwriter.ram_buffer_size">256</property> </indexing> Data Grid 8.2: <indexing> <index-writer ram-buffer-size="256"> <index-merge factor="30" max-size="1024"/> </index-writer> </indexing> Index storage Data Grid 8.2 includes a storage attribute that replaces the property element configuration in versions. The storage attribute lets you configure whether to store indexes in JVM heap or on the host file system. File system storage Data Grid 8.1: <indexing> <property name="default.directory_provider">filesystem</property> <property name="default.indexBase">USD{java.io.tmpdir}/baseDir</property> </indexing> Data Grid 8.2: <indexing storage="filesystem" path="USD{java.io.tmpdir}/baseDir"/> JVM heap storage Data Grid 8.1: <indexing> <property name="default.directory_provider">local-heap</property> </indexing> Data Grid 8.2: <indexing storage="local-heap"> </indexing> Adapting index properties When migrating your indexing configuration to Data Grid 8.2, you should also make the following changes: Remove the lucene_version property. Important Do not use indexes that you created with older Lucene versions with Data Grid 8.2. After you adapt your indexing configuration, you should rebuild the index when you start Data Grid for the first time to complete the migration to Data Grid 8.2. Remove the default.sharding_strategy.nbr_of_shards property. This property is deprecated without a replacement. Remove the infinispan.query.lucene.max-boolean-clauses property. As of Data Grid 8.2 you should set this as a JVM property. Hibernate and Lucene annotations For information about migrating Hibernate and Lucene annotations, such as @Field , @Indexed , @SortableField , and others, refer to the Annotation mapping section of the Hibernate Search Migration Guide . Additional resources Data Grid Query API Data Grid Deprecations and Removals Hibernate Search Migration Guide: Annotation mapping 4.2.2. Query API changes in 8.3 Data Grid 8.3 removes the IndexedQueryMode parameter. Data Grid automatically detects the optimal mode for querying caches and ignored this optional parameter in earlier versions. Additional resources Querying Data Grid Caches Data Grid Query API Data Grid Deprecations and Removals 4.2.3. Query API changes in 8.4 Data Grid native annotations Data Grid 8.4 introduces new indexing annotations: @Indexed , @Basic , @Decimal , @Keyword , @Text , and @Embedded . 
Each of the annotations supports a set of attributes that you can use to further describe how an entity is indexed. These new annotations replaced Hibernate Search annotations, which means that you are no longer required to annotate your Java classes with the @ProtoDoc annotation for remote caches. All annotations are copied as comments to the generated .proto files. The following table summarizes the mapping of fields between Hibernate Search 5 (HS5) annotations and Data Grid native annotations: HS5 annotations Indexing attributes Data Grid native annotations Description @Field(index=Index.YES) searchable @Basic, @Decimal, @Keyword, @Text Fields previously marked as indexed are now searchable. @Field(store = Store.YES) projectable = true @Basic, @Decimal, @Keyword, @Text Fields previously marked as stored are now projectable. type String && @Field(analyze = Analyze.YES) analyzer = "<definition>" @Text String fields that were marked with analyzer definitions continue to be analyzed during indexing. @Field(analyze = Analyze.NO) && (@Field(store = Store.YES) OR @Field(sortable = Sortable.YES)) sortable = true @Basic, @Decimal, @Keyword Fields that were not analyzed but were either stored in the index or marked as sortable are now sortable. N/A aggregable = true @Basic, @Decimal, @Keyword Performing aggregation operations by using the Hibernate Search 5 annotations was not possible. N/A normalizer = "lowercase" @Keyword Mapping fields that were analyzed or normalized is not possible due to the potential data loss in the process. Query efficiency You can limit the number of returned results for a query instance by using the default-max-results cache property. The default value of default-max-results is 100. Limiting the number of results returned by a query significantly improves the performance of queries that don't have an explicit limit set. Additional resources Querying Data Grid Caches Data Grid Query API | [
"/v2/caches/<cacheName>/search/indexes?action=reindex",
"POST /v2/caches/<cacheName>?action=disconnect-source",
"DELETE /v2/caches/<cacheName>/rolling-upgrade/source-connection",
"\"indexing\": { \"enabled\": \"true\" },",
"<distributed-cache name=\"my-cache\"> <indexing> <indexed-entities> <indexed-entity>com.acme.query.test.Car</indexed-entity> <indexed-entity>com.acme.query.test.Truck</indexed-entity> </indexed-entities> </indexing> </distributed-cache>",
"import org.infinispan.configuration.cache.*; ConfigurationBuilder config=new ConfigurationBuilder(); config.indexing().enable().addIndexedEntity(Car.class).addIndexedEntity(Truck.class);",
"<replicated-cache name=\"myReplicatedCache\" statistics=\"true\"> <!-- Cache configuration goes here. --> </replicated-cache>",
"<indexing> <property name=\"default.indexmanager\">near-real-time</property> </indexing>",
"<indexing enabled=\"true\"/>",
"<indexing> <property name=\"default.reader.async_refresh_period_ms\">1000</property> </indexing>",
"<indexing> <index-reader refresh-interval=\"1000\"/> </indexing>",
"<indexing> <property name=\"default.worker.execution\">async</property> <property name=\"default.index_flush_interval\">500</property> </indexing>",
"<indexing> <index-writer commit-interval=\"500\"/> </indexing>",
"<indexing> <property name=\"default.indexwriter.merge_factor\">30</property> <property name=\"default.indexwriter.merge_max_size\">1024</property> <property name=\"default.indexwriter.ram_buffer_size\">256</property> </indexing>",
"<indexing> <index-writer ram-buffer-size=\"256\"> <index-merge factor=\"30\" max-size=\"1024\"/> </index-writer> </indexing>",
"<indexing> <property name=\"default.directory_provider\">filesystem</property> <property name=\"default.indexBase\">USD{java.io.tmpdir}/baseDir</property> </indexing>",
"<indexing storage=\"filesystem\" path=\"USD{java.io.tmpdir}/baseDir\"/>",
"<indexing> <property name=\"default.directory_provider\">local-heap</property> </indexing>",
"<indexing storage=\"local-heap\"> </indexing>"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/migrating_to_data_grid_8/api-migration |
Chapter 6. Postinstallation cluster tasks | Chapter 6. Postinstallation cluster tasks After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements. 6.1. Available cluster customizations You complete most of the cluster configuration and customization after you deploy your OpenShift Container Platform cluster. A number of configuration resources are available. Note If you install your cluster on IBM Z, not all features and functions are available. You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider. For current documentation of the settings that you control by using these resources, use the oc explain command, for example oc explain builds --api-version=config.openshift.io/v1 6.1.1. Cluster configuration resources All cluster configuration resources are globally scoped (not namespaced) and named cluster . Resource name Description apiserver.config.openshift.io Provides API server configuration such as certificates and certificate authorities . authentication.config.openshift.io Controls the identity provider and authentication configuration for the cluster. build.config.openshift.io Controls default and enforced configuration for all builds on the cluster. console.config.openshift.io Configures the behavior of the web console interface, including the logout behavior . featuregate.config.openshift.io Enables FeatureGates so that you can use Tech Preview features. image.config.openshift.io Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details). ingress.config.openshift.io Configuration details related to routing such as the default domain for routes. oauth.config.openshift.io Configures identity providers and other behavior related to internal OAuth server flows. project.config.openshift.io Configures how projects are created including the project template. proxy.config.openshift.io Defines proxies to be used by components needing external network access. Note: not all components currently consume this value. scheduler.config.openshift.io Configures scheduler behavior such as profiles and default node selectors. 6.1.2. Operator configuration resources These configuration resources are cluster-scoped instances, named cluster , which control the behavior of a specific component as owned by a particular Operator. Resource name Description consoles.operator.openshift.io Controls console appearance such as branding customizations config.imageregistry.operator.openshift.io Configures OpenShift image registry settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type. config.samples.operator.openshift.io Configures the Samples Operator to control which example image streams and templates are installed on the cluster. 6.1.3. Additional configuration resources These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances. 
Resource name Instance name Namespace Description alertmanager.monitoring.coreos.com main openshift-monitoring Controls the Alertmanager deployment parameters. ingresscontroller.operator.openshift.io default openshift-ingress-operator Configures Ingress Operator behavior such as domain, number of replicas, certificates, and controller placement. 6.1.4. Informational Resources You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly. Resource name Instance name Description clusterversion.config.openshift.io version In OpenShift Container Platform 4.12, you must not customize the ClusterVersion resource for production clusters. Instead, follow the process to update a cluster . dns.config.openshift.io cluster You cannot modify the DNS settings for your cluster. You can view the DNS Operator status . infrastructure.config.openshift.io cluster Configuration details allowing the cluster to interact with its cloud provider. network.config.openshift.io cluster You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation . 6.2. Adding worker nodes After you deploy your OpenShift Container Platform cluster, you can add worker nodes to scale cluster resources. There are different ways you can add worker nodes depending on the installation method and the environment of your cluster. 6.2.1. Adding worker nodes to installer-provisioned infrastructure clusters For installer-provisioned infrastructure clusters, you can manually or automatically scale the MachineSet object to match the number of available bare-metal hosts. To add a bare-metal host, you must configure all network prerequisites, configure an associated baremetalhost object, then provision the worker node to the cluster. You can add a bare-metal host manually or by using the web console. Adding worker nodes using the web console Adding worker nodes using YAML in the web console Manually adding a worker node to an installer-provisioned infrastructure cluster 6.2.2. Adding worker nodes to user-provisioned infrastructure clusters For user-provisioned infrastructure clusters, you can add worker nodes by using a RHEL or RHCOS ISO image and connecting it to your cluster using cluster Ignition config files. For RHEL worker nodes, the following example uses Ansible playbooks to add worker nodes to the cluster. For RHCOS worker nodes, the following example uses an ISO image and network booting to add worker nodes to the cluster. Adding RHCOS worker nodes to a user-provisioned infrastructure cluster Adding RHEL worker nodes to a user-provisioned infrastructure cluster 6.2.3. Adding worker nodes to clusters managed by the Assisted Installer For clusters managed by the Assisted Installer, you can add worker nodes by using the Red Hat OpenShift Cluster Manager console, the Assisted Installer REST API or you can manually add worker nodes using an ISO image and cluster Ignition config files. Adding worker nodes using the OpenShift Cluster Manager Adding worker nodes using the Assisted Installer REST API Manually adding worker nodes to a SNO cluster 6.2.4. Adding worker nodes to clusters managed by the multicluster engine for Kubernetes For clusters managed by the multicluster engine for Kubernetes, you can add worker nodes by using the dedicated multicluster engine console. Scaling hosts to an infrastructure environment 6.3. 
Adjust worker nodes If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new compute machine sets, scale them up, then scale the original compute machine set down before removing them. 6.3.1. Understanding the difference between compute machine sets and the machine config pool MachineSet objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider. The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades. The MachineConfigPool object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool. The NodeSelector object can be replaced with a reference to the MachineSet object. 6.3.2. Scaling a compute machine set manually To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets -n openshift-machine-api The compute machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the compute machines that are in the cluster by running the following command: USD oc get machine -n openshift-machine-api Set the annotation on the compute machine that you want to delete by running the following command: USD oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true" Scale the compute machine set by running one of the following commands: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the compute machine set up or down. It takes several minutes for the new machines to be available. Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. Verification Verify the deletion of the intended machine by running the following command: USD oc get machines 6.3.3. The compute machine set deletion policy Random , Newest , and Oldest are the three supported deletion options. The default is Random , meaning that random machines are chosen and deleted when scaling compute machine sets down. 
The deletion policy can be set according to the use case by modifying the particular compute machine set: spec: deletePolicy: <delete_policy> replicas: <desired_replica_count> Specific machines can also be prioritized for deletion by adding the annotation machine.openshift.io/delete-machine=true to the machine of interest, regardless of the deletion policy. Important By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker compute machine set to 0 unless you first relocate the router pods. Note Custom compute machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker compute machine sets are scaling down. This prevents service disruption. 6.3.4. Creating default cluster-wide node selectors You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes. With cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. Note You can add additional key/value pairs to a pod. But you cannot add a different value for a default key. Procedure To add a default cluster-wide node selector: Edit the Scheduler Operator CR to add the default cluster-wide node selectors: USD oc edit scheduler cluster Example Scheduler Operator CR with a node selector apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false 1 Add a node selector with the appropriate <key>:<value> pairs. After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy. Add labels to a node by using a compute machine set or editing the node directly: Use a compute machine set to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api 1 1 Add a <key>/<value> pair for each label. 
For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node ... Redeploy the nodes associated with that compute machine set by scaling down to 0 and scaling up the nodes: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.25.0 Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the node using the oc get command: USD oc get nodes -l <key>=<value>,<key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.25.0 6.3.5. Creating user workloads in AWS Local Zones After you create an Amazon Web Service (AWS) Local Zone environment, and you deploy your cluster, you can use edge worker nodes to create user workloads in Local Zone subnets. After the openshift-installer creates the cluster, the installation program automatically specifies a taint effect of NoSchedule to each edge worker node. This means that a scheduler does not add a new pod, or deployment, to a node if the pod does not match the specified tolerations for a taint. You can modify the taint for better control over how each node creates a workload in each Local Zone subnet. The openshift-installer creates the compute machine set manifests file with node-role.kubernetes.io/edge and node-role.kubernetes.io/worker labels applied to each edge worker node that is located in a Local Zone subnet. Prerequisites You have access to the OpenShift CLI ( oc ). You deployed your cluster in a Virtual Private Cloud (VPC) with defined Local Zone subnets. You ensured that the compute machine set for the edge workers on Local Zone subnets specifies the taints for node-role.kubernetes.io/edge . Procedure Create a deployment resource YAML file for an example application to be deployed in the edge worker node that operates in a Local Zone subnet. 
Ensure that you specify the correct tolerations that match the taints for the edge worker node. Example of a configured deployment resource for an edge worker node that operates in a Local Zone subnet kind: Namespace apiVersion: v1 metadata: name: <local_zone_application_namespace> --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <pvc_name> namespace: <local_zone_application_namespace> spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: gp2-csi 1 volumeMode: Filesystem --- apiVersion: apps/v1 kind: Deployment 2 metadata: name: <local_zone_application> 3 namespace: <local_zone_application_namespace> 4 spec: selector: matchLabels: app: <local_zone_application> replicas: 1 template: metadata: labels: app: <local_zone_application> zone-group: USD{ZONE_GROUP_NAME} 5 spec: securityContext: seccompProfile: type: RuntimeDefault nodeSelector: 6 machine.openshift.io/zone-group: USD{ZONE_GROUP_NAME} tolerations: 7 - key: "node-role.kubernetes.io/edge" operator: "Equal" value: "" effect: "NoSchedule" containers: - image: openshift/origin-node command: - "/bin/socat" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \"printf \\\"HTTP/1.0 200 OK\r\n\r\n\\\"; sed -e \\\"/^\r/q\\\"\"' imagePullPolicy: Always name: echoserver ports: - containerPort: 8080 volumeMounts: - mountPath: "/mnt/storage" name: data volumes: - name: data persistentVolumeClaim: claimName: <pvc_name> 1 storageClassName : For the Local Zone configuration, you must specify gp2-csi . 2 kind : Defines the deployment resource. 3 name : Specifies the name of your Local Zone application. For example, local-zone-demo-app-nyc-1 . 4 namespace: Defines the namespace for the AWS Local Zone where you want to run the user workload. For example: local-zone-app-nyc-1a . 5 zone-group : Defines the group to where a zone belongs. For example, us-east-1-iah-1 . 6 nodeSelector : Targets edge worker nodes that match the specified labels. 7 tolerations : Sets the values that match with the taints defined on the MachineSet manifest for the Local Zone node. Create a service resource YAML file for the node. This resource exposes a pod from a targeted edge worker node to services that run inside your Local Zone network. Example of a configured service resource for an edge worker node that operates in a Local Zone subnet apiVersion: v1 kind: Service 1 metadata: name: <local_zone_application> namespace: <local_zone_application_namespace> spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: 2 app: <local_zone_application> 1 kind : Defines the service resource. 2 selector: Specifies the label type applied to managed pods. steps Optional: Use the AWS Load Balancer (ALB) Operator to expose a pod from a targeted edge worker node to services that run inside a Local Zone subnet from a public network. See Installing the AWS Load Balancer Operator . Additional resources Installing a cluster using AWS Local Zones Understanding taints and tolerations Using taints and tolerations to control logging pod placement 6.4. Improving cluster stability in high latency environments using worker latency profiles If the cluster administrator has performed latency tests for platform verification, they can discover the need to adjust the operation of the cluster to ensure stability in cases of high latency. The cluster administrator need change only one parameter, recorded in a file, which controls four parameters affecting how supervisory processes read status and interpret the health of the cluster. 
Changing only the one parameter provides cluster tuning in an easy, supportable manner. The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OpenShift Container Platform cluster. The Kubernetes Controller Manager ( kube controller ) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is: The node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition`Unknown`. In response, the scheduler stops scheduling pods to that node. The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default. This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. The Kubelet evicts pods from the node even though the node is healthy. To avoid this problem, you can use worker latency profiles to adjust the frequency that the Kubelet and the Kubernetes Controller Manager wait for status updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal. These worker latency profiles contain three sets of parameters that are pre-defined with carefully tuned values to control the reaction of the cluster to increased latency. No need to experimentally find the best values manually. You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network. 6.4.1. Understanding worker latency profiles Worker latency profiles are four different categories of carefully-tuned parameters. The four parameters which implement these values are node-status-update-frequency , node-monitor-grace-period , default-not-ready-toleration-seconds and default-unreachable-toleration-seconds . These parameters can use values which allow you control the reaction of the cluster to latency issues without needing to determine the best values using manual methods. Important Setting these parameters manually is not supported. Incorrect parameter settings adversely affect cluster stability. All worker latency profiles configure the following parameters: node-status-update-frequency Specifies how often the kubelet posts node status to the API server. node-monitor-grace-period Specifies the amount of time in seconds that the Kubernetes Controller Manager waits for an update from a kubelet before marking the node unhealthy and adding the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint to the node. default-not-ready-toleration-seconds Specifies the amount of time in seconds after marking a node unhealthy that the Kube API Server Operator waits before evicting pods from that node. default-unreachable-toleration-seconds Specifies the amount of time in seconds after marking a node unreachable that the Kube API Server Operator waits before evicting pods from that node. The following Operators monitor the changes to the worker latency profiles and respond accordingly: The Machine Config Operator (MCO) updates the node-status-update-frequency parameter on the worker nodes. 
The Kubernetes Controller Manager updates the node-monitor-grace-period parameter on the control plane nodes. The Kubernetes API Server Operator updates the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds parameters on the control plane nodes. While the default configuration works in most cases, OpenShift Container Platform offers two other worker latency profiles for situations where the network is experiencing higher latency than usual. The three worker latency profiles are described in the following sections: Default worker latency profile With the Default profile, each Kubelet updates its status every 10 seconds ( node-status-update-frequency ). The Kube Controller Manager checks the statuses of Kubelet every 5 seconds. The Kubernetes Controller Manager waits 40 seconds ( node-monitor-grace-period ) for a status update from Kubelet before considering the Kubelet unhealthy. If no status is made available to the Kubernetes Controller Manager, it then marks the node with the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint and evicts the pods on that node. If a pod on that node has the NoExecute taint, the pod is run according to tolerationSeconds . If the pod has no taint, it will be evicted in 300 seconds ( default-not-ready-toleration-seconds and default-unreachable-toleration-seconds settings of the Kube API Server ). Profile Component Parameter Value Default kubelet node-status-update-frequency 10s Kubelet Controller Manager node-monitor-grace-period 40s Kubernetes API Server Operator default-not-ready-toleration-seconds 300s Kubernetes API Server Operator default-unreachable-toleration-seconds 300s Medium worker latency profile Use the MediumUpdateAverageReaction profile if the network latency is slightly higher than usual. The MediumUpdateAverageReaction profile reduces the frequency of kubelet updates to 20 seconds and changes the period that the Kubernetes Controller Manager waits for those updates to 2 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 2 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value MediumUpdateAverageReaction kubelet node-status-update-frequency 20s Kubelet Controller Manager node-monitor-grace-period 2m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s Low worker latency profile Use the LowUpdateSlowReaction profile if the network latency is extremely high. The LowUpdateSlowReaction profile reduces the frequency of kubelet updates to 1 minute and changes the period that the Kubernetes Controller Manager waits for those updates to 5 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 5 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value LowUpdateSlowReaction kubelet node-status-update-frequency 1m Kubelet Controller Manager node-monitor-grace-period 5m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s 6.4.2.
Using and changing worker latency profiles To change a worker latency profile to deal with network latency, edit the node.config object to add the name of the profile. You can change the profile at any time as latency increases or decreases. You must move one worker latency profile at a time. For example, you cannot move directly from the Default profile to the LowUpdateSlowReaction worker latency profile. You must move from the Default worker latency profile to the MediumUpdateAverageReaction profile first, then to LowUpdateSlowReaction . Similarly, when returning to the Default profile, you must move from the low profile to the medium profile first, then to Default . Note You can also configure worker latency profiles upon installing an OpenShift Container Platform cluster. Procedure To move from the default worker latency profile: Move to the medium worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Add spec.workerLatencyProfile: MediumUpdateAverageReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1 # ... 1 Specifies the medium worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Optional: Move to the low worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Change the spec.workerLatencyProfile value to LowUpdateSlowReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1 # ... 1 Specifies use of the low worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Verification When all nodes return to the Ready condition, you can use the following command to look in the Kubernetes Controller Manager to ensure it was applied: USD oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5 Example output # ... - lastTransitionTime: "2022-07-11T19:47:10Z" reason: ProfileUpdated status: "False" type: WorkerLatencyProfileProgressing - lastTransitionTime: "2022-07-11T19:47:10Z" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: "True" type: WorkerLatencyProfileComplete - lastTransitionTime: "2022-07-11T19:20:11Z" reason: AsExpected status: "False" type: WorkerLatencyProfileDegraded - lastTransitionTime: "2022-07-11T19:20:36Z" status: "False" # ... 1 Specifies that the profile is applied and active. 
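You can also confirm which profile is currently applied before you move to the next one by reading the workerLatencyProfile field from the node.config object. This is a convenience check rather than part of the documented procedure; the jsonpath expression assumes the spec.workerLatencyProfile field shown in the examples above:
USD oc get nodes.config/cluster -o jsonpath='{.spec.workerLatencyProfile}' Example output MediumUpdateAverageReaction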
To change the medium profile to default or change the default to medium, edit the node.config object and set the spec.workerLatencyProfile parameter to the appropriate value. 6.5. Managing control plane machines Control plane machine sets provide management capabilities for control plane machines that are similar to what compute machine sets provide for compute machines. The availability and initial status of control plane machine sets on your cluster depend on your cloud provider and the version of OpenShift Container Platform that you installed. For more information, see Getting started with control plane machine sets . 6.6. Creating infrastructure machine sets for production environments You can create a compute machine set to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. For information on infrastructure nodes and which components can run on infrastructure nodes, see Creating infrastructure machine sets . To create an infrastructure node, you can use a machine set , assign a label to the nodes , or use a machine config pool . For sample machine sets that you can use with these procedures, see Creating machine sets for different clouds . Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label . 6.6.1. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 6.6.2. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. 
This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra="" 1 # ... 1 This example node selector deploys pods on infrastructure nodes by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources For information on how to configure project node selectors to avoid cluster-wide node selector key conflicts, see Project node selectors . 6.6.3. Creating a machine config pool for infrastructure machines If you need infrastructure machines to have dedicated configurations, you must create an infra pool. Procedure Add a label to the node you want to assign as the infra node with a specific label: USD oc label node <node_name> <label> USD oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra= Create a machine config pool that contains both the worker role and your custom role as machine config selector: USD cat infra.mcp.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" 2 1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector . Note Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool. 
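Before you create the pool, you can optionally confirm which nodes carry the label that the pool's nodeSelector matches. This is a sketch and not part of the documented procedure; it assumes the node-role.kubernetes.io/infra label that was added to the node in the previous step: USD oc get nodes -l node-role.kubernetes.io/infra Only the nodes returned by this selector are managed by the new infra pool after you create it.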
After you have the YAML file, you can create the machine config pool: USD oc create -f infra.mcp.yaml Check the machine configs to ensure that the infrastructure configuration rendered successfully: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d You should see a new machine config, with the rendered-infra-* prefix. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra . Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes. Note After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration. Create a machine config: USD cat infra.mc.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra 1 Add the label you added to the node as a nodeSelector . 
Apply the machine config to the infra-labeled nodes: USD oc create -f infra.mc.yaml Confirm that your new machine config pool is available: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m In this example, a worker node was changed to an infra node. Additional resources See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool. 6.7. Assigning machine set resources to infrastructure nodes After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied. However, when an infra node is assigned the worker role, there is a chance that user workloads can get assigned inadvertently to the infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods that you want to control. 6.7.1. Binding infrastructure node workloads using taints and tolerations If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it. Important It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pool that selects the custom label exists. The infra label communicates to the cluster that it does not count toward the total number of subscriptions. Prerequisites Configure additional MachineSet objects in your OpenShift Container Platform cluster. Procedure Add a taint to the infra node to prevent scheduling user workloads on it: Determine if the node has the taint: USD oc describe nodes <node_name> Sample output oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker ... Taints: node-role.kubernetes.io/infra:NoSchedule ... This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step. If you have not configured a taint to prevent scheduling user workloads on it: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved ... This example places a taint on node1 that has the key node-role.kubernetes.io/infra , the value reserved , and the taint effect NoExecute . Nodes with the NoExecute effect schedule only pods that tolerate the taint and evict pods that are already running on the node without a matching toleration. Note If a descheduler is used, pods violating node taints could be evicted from the cluster.
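Optionally, you can confirm that the taint was applied with the key, value, and effect that you intended by reading the taints back from the node object. This is a sketch and not part of the documented procedure; <node_name> is a placeholder: USD oc get node <node_name> -o jsonpath='{.spec.taints}{"\n"}' For the example above, the output includes an entry with the key node-role.kubernetes.io/infra , the value reserved , and the effect NoExecute .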
Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the Pod object specification: tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Equal 3 value: reserved 4 1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the Equal Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. 4 Specify the value of the key-value pair taint that you added to the node. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node. Note Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details. Additional resources See Controlling pod placement using the scheduler for general information on scheduling a pod to a node. 6.8. Moving resources to infrastructure machine sets Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created. 6.8.1. Moving the router You can deploy the router pod to a different compute machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional compute machine sets in your OpenShift Container Platform cluster. Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Confirm that the router pod is running on the infra node. 
View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.25.0 Because the role list includes infra , the pod is running on the correct node. 6.8.2. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional compute machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Verify that the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 6.8.3. Moving the monitoring solution The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ).
Procedure Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. 6.8.4. Moving logging resources For information about moving logging resources, see: Using node selectors to move logging resources Using taints and tolerations to control logging pod placement 6.9. Applying autoscaling to your cluster Applying autoscaling to an OpenShift Container Platform cluster involves deploying a cluster autoscaler and then deploying machine autoscalers for each machine type in your cluster. For more information, see Applying autoscaling to an OpenShift Container Platform cluster . 6.10. Configuring Linux cgroup v2 You can enable Linux control group version 2 (cgroup v2) in your cluster by editing the node.config object. Enabling cgroup v2 in OpenShift Container Platform disables all cgroups version 1 controllers and hierarchies in your cluster. cgroup v1 is enabled by default.
cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information , and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2. Important OpenShift Container Platform cgroups version 2 support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.12 or later. You are logged in to the cluster as a user with administrative privileges. You have enabled the TechPreviewNoUpgrade feature set by using the feature gates. Procedure Enable cgroup v2 on nodes: Edit the node.config object: USD oc edit nodes.config/cluster Add spec.cgroupMode: "v2" : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: "v2" 1 ... 1 Enables cgroup v2. 
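As an alternative to editing the object interactively, the same change can be applied with a single patch command. This is a sketch and not part of the documented procedure; it assumes the nodes.config/cluster object and the spec.cgroupMode field shown in the example above: USD oc patch nodes.config cluster --type merge -p '{"spec":{"cgroupMode":"v2"}}' The patch sets spec.cgroupMode to v2 , which is equivalent to the edit shown in the example node.config object.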
Verification Check the machine configs to see that the new machine configs were added: USD oc get mc Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 3m 1 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 3m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m worker-enable-cgroups-v2 3.2.0 10s 1 New machine configs are created, as expected. Check that the new kernelArguments were added to the new machine configs: USD oc describe mc <name> Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: - systemd_unified_cgroup_hierarchy=1 1 - cgroup_no_v1="all" 2 - psi=1 3 1 Enables cgroup v2 in systemd. 2 Disables cgroups v1. 3 Enables the Linux Pressure Stall Information (PSI) feature. Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready master 58m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.25.0 After a node returns to the Ready state, start a debug session for that node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host Check that the sys/fs/cgroup/cgroup2fs file is present on your nodes. This file is created by cgroup v2: USD stat -c %T -f /sys/fs/cgroup Example output cgroup2fs 6.11. Enabling Technology Preview features using FeatureGates You can turn on a subset of the current Technology Preview features on for all nodes in the cluster by editing the FeatureGate custom resource (CR). 6.11.1. Understanding feature gates You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. You can activate the following feature set by using the FeatureGate CR: TechPreviewNoUpgrade . This feature set is a subset of the current Technology Preview features. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them, while leaving the features disabled on production clusters. 
Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. The following Technology Preview features are enabled by this feature set: CSI automatic migration. Enables automatic migration for supported in-tree volume plugins to their equivalent Container Storage Interface (CSI) drivers. Supported for: Azure File ( CSIMigrationAzureFile ) VMware vSphere ( CSIMigrationvSphere ) Shared Resources CSI Driver and Build CSI Volumes in OpenShift Builds. Enables the Container Storage Interface (CSI). ( CSIDriverSharedResource ) CSI volumes. Enables CSI volume support for the OpenShift Container Platform build system. ( BuildCSIVolumes ) Swap memory on nodes. Enables swap memory use for OpenShift Container Platform workloads on a per-node basis. ( NodeSwap ) cgroups v2. Enables cgroup v2, the version of the Linux cgroup API. ( CGroupsV2 ) crun. Enables the crun container runtime. ( Crun ) Insights Operator. Enables the Insights Operator, which gathers OpenShift Container Platform configuration data and sends it to Red Hat. ( InsightsConfigAPI ) External cloud providers. Enables support for external cloud providers for clusters on vSphere, AWS, Azure, and GCP. Support for OpenStack is GA. ( ExternalCloudProvider ) Pod topology spread constraints. Enables the matchLabelKeys parameter for pod topology constraints. The parameter is list of pod label keys to select the pods over which spreading will be calculated. ( MatchLabelKeysInPodTopologySpread ) Pod security admission enforcement. Enables restricted enforcement for pod security admission. Instead of only logging a warning, pods are rejected if they violate pod security standards. ( OpenShiftPodSecurityAdmission ) Note Pod security admission restricted enforcement is only activated if you enable the TechPreviewNoUpgrade feature set after your OpenShift Container Platform cluster is installed. It is not activated if you enable the TechPreviewNoUpgrade feature set during cluster installation. 6.11.2. Enabling feature sets using the web console You can use the OpenShift Container Platform web console to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Procedure To enable feature sets: In the OpenShift Container Platform web console, switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click FeatureGate . On the Custom Resource Definition Details page, click the Instances tab. Click the cluster feature gate, then click the YAML tab. Edit the cluster instance to add specific feature sets: Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample Feature Gate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. 
From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 6.11.3. Enabling feature sets using the CLI You can use the OpenShift CLI ( oc ) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To enable feature sets: Edit the FeatureGate CR named cluster : USD oc edit featuregate cluster Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample FeatureGate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 6.12. etcd tasks Back up etcd, enable or disable etcd encryption, or defragment etcd data. 6.12.1. About etcd encryption By default, etcd data is not encrypted in OpenShift Container Platform. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties. When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted: Secrets Config maps Routes OAuth access tokens OAuth authorize tokens When you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys to restore from an etcd backup. Note Etcd encryption only encrypts values, not keys. Resource types, namespaces, and object names are unencrypted. If etcd encryption is enabled during a backup, the static_kuberesources_<datetimestamp>.tar.gz file contains the encryption keys for the etcd snapshot. For security reasons, store this file separately from the etcd snapshot. However, this file is required to restore a state of etcd from the respective etcd snapshot. 6.12.2. 
Enabling etcd encryption You can enable etcd encryption to encrypt sensitive resources in your cluster. Warning Do not back up etcd resources until the initial encryption process is completed. If the encryption process is not completed, the backup might be only partially encrypted. After you enable etcd encryption, several changes can occur: The etcd encryption might affect the memory consumption of a few resources. You might notice a transient effect on backup performance because the leader must serve the backup. Disk I/O can affect the node that receives the backup state. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: USD oc edit apiserver Set the encryption field type to aescbc : spec: encryption: type: aescbc 1 1 The aescbc type means that AES-CBC with PKCS#7 padding and a 32 byte key is used to perform the encryption. Save the file to apply the changes. The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster. Verify that etcd encryption was successful. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted: USD oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: routes.route.openshift.io If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: secrets, configmaps If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully encrypted: USD oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. 6.12.3. Disabling etcd encryption You can disable encryption of etcd data in your cluster. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: USD oc edit apiserver Set the encryption field type to identity : spec: encryption: type: identity 1 1 The identity type is the default value and means that no encryption is performed. Save the file to apply the changes. The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster. Verify that etcd decryption was successful.
Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted: USD oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully decrypted: USD oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. 6.12.4. Backing up etcd data Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd. Important Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have checked whether the cluster-wide proxy is enabled. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml . The proxy is enabled if the httpProxy , httpsProxy , and noProxy fields have values set. Procedure Start a debug session as root for a control plane node: USD oc debug --as-root node/<node_name> Change your root directory to /host in the debug shell: sh-4.4# chroot /host If the cluster-wide proxy is enabled, export the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY environment variables by running the following commands: USD export HTTP_PROXY=http://<your_proxy.example.com>:8080 USD export HTTPS_PROXY=https://<your_proxy.example.com>:8080 USD export NO_PROXY=<example.com> Run the cluster-backup.sh script in the debug shell and pass in the location to save the backup to. Tip The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command. 
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup Example script output found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"} {"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"} {"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"} {"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"} {"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459} {"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host: snapshot_<datetimestamp>.db : This file is the etcd snapshot. The cluster-backup.sh script confirms its validity. static_kuberesources_<datetimestamp>.tar.gz : This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot. Note If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot. Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted. 6.12.5. Defragmenting etcd data For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes. Monitor these key metrics: etcd_server_quota_backend_bytes , which is the current quota limit etcd_mvcc_db_total_size_in_use_in_bytes , which indicates the actual database usage after a history compaction etcd_mvcc_db_total_size_in_bytes , which shows the database size, including free space waiting for defragmentation Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction. History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. 
You must defragment etcd to make this space available to the host file system. Defragmentation occurs automatically, but you can also trigger it manually. Note Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user. 6.12.5.1. Automatic defragmentation The etcd Operator automatically defragments disks. No manual intervention is needed. Verify that the defragmentation process is successful by viewing one of these logs: etcd logs cluster-etcd-operator pod operator status error log Warning Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the running instance or the component resumes work again after the restart. Example log output for successful defragmentation etcd member has been defragmented: <member_name> , memberID: <member_id> Example log output for unsuccessful defragmentation failed defrag on member: <member_name> , memberID: <member_id> : <error_message> 6.12.5.2. Manual defragmentation A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases: When etcd uses more than 50% of its available space for more than 10 minutes When etcd is actively using less than 50% of its total database size for more than 10 minutes You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024 Warning Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover. Follow this procedure to defragment etcd data on each etcd member. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Determine which etcd member is the leader, because the leader should be defragmented last. Get the list of etcd pods: USD oc -n openshift-etcd get pods -l k8s-app=etcd -o wide Example output etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none> Choose a pod and run the following command to determine which etcd member is the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table Example output Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. 
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com . Defragment an etcd member. Connect to the running etcd container, passing in the name of a pod that is not the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com Unset the ETCDCTL_ENDPOINTS environment variable: sh-4.4# unset ETCDCTL_ENDPOINTS Defragment the etcd member: sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag Example output Finished defragmenting etcd member[https://localhost:2379] If a timeout error occurs, increase the value for --command-timeout until the command succeeds. Verify that the database size was reduced: sh-4.4# etcdctl endpoint status -w table --cluster Example output +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last. Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them. Check if there are any NOSPACE alarms: sh-4.4# etcdctl alarm list Example output memberID:12345678912345678912 alarm:NOSPACE Clear the alarms: sh-4.4# etcdctl alarm disarm 6.12.6. Restoring to a cluster state You can use a saved etcd backup to restore a cluster state or restore a cluster that has lost the majority of control plane hosts. 
Note If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a more simple etcd recovery procedure. Important When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2. Prerequisites Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation. A healthy control plane host to use as the recovery host. SSH access to control plane hosts. A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz . Important For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery, control plane machines, one by one. Procedure Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on. Establish SSH connectivity to each of the control plane nodes, including the recovery host. The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal. Important If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state. Copy the etcd backup directory to the recovery control plane host. This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host. Stop the static pods on any other control plane nodes. Note You do not need to stop the static pods on the recovery host. Access a control plane host that is not the recovery host. Move the existing etcd pod file out of the kubelet manifest directory: USD sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp Verify that the etcd pods are stopped. USD sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard" The output of this command should be empty. If it is not empty, wait a few minutes and check again. Move the existing Kubernetes API server pod file out of the kubelet manifest directory: USD sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp Verify that the Kubernetes API server pods are stopped. USD sudo crictl ps | grep kube-apiserver | egrep -v "operator|guard" The output of this command should be empty. If it is not empty, wait a few minutes and check again. Move the etcd data directory to a different location: USD sudo mv -v /var/lib/etcd/ /tmp If the /etc/kubernetes/manifests/keepalived.yaml file exists and the node is deleted, follow these steps: Move the /etc/kubernetes/manifests/keepalived.yaml file out of the kubelet manifest directory: USD sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp Verify that any containers managed by the keepalived daemon are stopped: USD sudo crictl ps --name keepalived The output of this command should be empty. If it is not empty, wait a few minutes and check again. 
Check if the control plane has any Virtual IPs (VIPs) assigned to it: USD ip -o address | egrep '<api_vip>|<ingress_vip>' For each reported VIP, run the following command to remove it: USD sudo ip address del <reported_vip> dev <reported_vip_device> Repeat this step on each of the other control plane hosts that is not the recovery host. Access the recovery control plane host. If the keepalived daemon is in use, verify that the recovery control plane node owns the VIP: USD ip -o address | grep <api_vip> The address of the VIP is highlighted in the output if it exists. This command returns an empty string if the VIP is not set or configured incorrectly. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY environment variables. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml . The proxy is enabled if the httpProxy , httpsProxy , and noProxy fields have values set. Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory: USD sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup Example script output ...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml Note The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup. Check the nodes to ensure they are in the Ready state. Run the following command: USD oc get nodes -w Sample output NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.25.0 host-172-25-75-38 Ready infra,worker 3d20h v1.25.0 host-172-25-75-40 Ready master 3d20h v1.25.0 host-172-25-75-65 Ready master 3d20h v1.25.0 host-172-25-75-74 Ready infra,worker 3d20h v1.25.0 host-172-25-75-79 Ready worker 3d20h v1.25.0 host-172-25-75-86 Ready worker 3d20h v1.25.0 host-172-25-75-98 Ready infra,worker 3d20h v1.25.0 It can take several minutes for all nodes to report their state. If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console. USD ssh -i <ssh-key-path> core@<master-hostname> Sample pki directory sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem Restart the kubelet service on all control plane hosts. 
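Where several nodes are stuck in the NotReady state because of stale certificates, the certificate cleanup above and the kubelet restart in the following step can be combined per node. This is a minimal sketch, assuming SSH access as the core user; NOT_READY_NODES is a hypothetical placeholder derived from the oc get nodes output.
# Sketch only: move stale kubelet certificates aside and restart the kubelet
# on each node that remains NotReady after the restore.
NOT_READY_NODES="host-172-25-75-28"   # hypothetical placeholder
for node in ${NOT_READY_NODES}; do
  ssh -i <ssh-key-path> core@"${node}" '
    sudo mv /var/lib/kubelet/pki/*.pem /tmp/
    sudo systemctl restart kubelet.service
  '
done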
From the recovery host, run the following command: USD sudo systemctl restart kubelet.service Repeat this step on all other control plane hosts. Approve the pending CSRs: Note Clusters with no worker nodes, such as single-node clusters or clusters consisting of three schedulable control plane nodes, will not have any pending CSRs to approve. You can skip all the commands listed in this step. Get the list of current CSRs: USD oc get csr The output lists pending kubelet service CSRs (for user-provisioned installations) and pending node-bootstrapper CSRs. Review the details of a CSR to verify that it is valid: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid node-bootstrapper CSR: USD oc adm certificate approve <csr_name> For user-provisioned installations, approve each valid kubelet service CSR: USD oc adm certificate approve <csr_name> Verify that the single member control plane has started successfully. From the recovery host, verify that the etcd container is running. USD sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard" Example output 3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0 From the recovery host, verify that the etcd pod is running. USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s If the status is Pending , or the output lists more than one running etcd pod, wait a few minutes and check again. If you are using the OVN-Kubernetes network plugin, delete the node objects that are associated with control plane hosts that are not the recovery control plane host. USD oc delete node <non-recovery-controlplane-host-1> <non-recovery-controlplane-host-2> Verify that the Cluster Network Operator (CNO) redeploys the OVN-Kubernetes control plane and that it no longer references the non-recovery controller IP addresses. To verify this result, regularly check the output of the following command. Wait until it returns an empty result before you proceed to restart the Open Virtual Network (OVN) Kubernetes pods on all of the hosts in the next step. USD oc -n openshift-ovn-kubernetes get ds/ovnkube-master -o yaml | grep -E '<non-recovery_controller_ip_1>|<non-recovery_controller_ip_2>' Note It can take at least 5-10 minutes for the OVN-Kubernetes control plane to be redeployed and the command to return empty output. If you are using the OVN-Kubernetes network plugin, restart the Open Virtual Network (OVN) Kubernetes pods on all of the hosts. Note Validating and mutating admission webhooks can reject pods. If you add any additional webhooks with the failurePolicy set to Fail , then they can reject pods and the restoration process can fail. You can avoid this by saving and deleting webhooks while restoring the cluster state. After the cluster state is restored successfully, you can enable the webhooks again. Alternatively, you can temporarily set the failurePolicy to Ignore while restoring the cluster state. After the cluster state is restored successfully, you can set the failurePolicy to Fail . Remove the northbound database (nbdb) and southbound database (sbdb).
Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run the following command: USD sudo rm -f /var/lib/ovn/etc/*.db Delete all OVN-Kubernetes control plane pods by running the following command: USD oc delete pods -l app=ovnkube-master -n openshift-ovn-kubernetes Ensure that any OVN-Kubernetes control plane pods are deployed again and are in a Running state by running the following command: USD oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE ovnkube-master-nb24h 4/4 Running 0 48s Delete all ovnkube-node pods by running the following command: USD oc get pods -n openshift-ovn-kubernetes -o name | grep ovnkube-node | while read p ; do oc delete USDp -n openshift-ovn-kubernetes ; done Check the status of the OVN pods by running the following command: USD oc get po -n openshift-ovn-kubernetes If any OVN pods are in the Terminating status, delete the node that is running that OVN pod by running the following command. Replace <node> with the name of the node you are deleting: USD oc delete node <node> Use SSH to log in to the OVN pod node with the Terminating status by running the following command: USD ssh -i <ssh-key-path> core@<node> Move all PEM files from the /var/lib/kubelet/pki directory by running the following command: USD sudo mv /var/lib/kubelet/pki/* /tmp Restart the kubelet service by running the following command: USD sudo systemctl restart kubelet.service Return to the recovery etcd machines by running the following command: USD oc get csr Example output NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending Approve all new CSRs by running the following command, replacing csr-<uuid> with the name of the CSR: oc adm certificate approve csr-<uuid> Verify that the node is back by running the following command: USD oc get nodes Ensure that all the ovnkube-node pods are deployed again and are in a Running state by running the following command: USD oc get pods -n openshift-ovn-kubernetes | grep ovnkube-node Delete and re-create other non-recovery, control plane machines, one by one. After the machines are re-created, a new revision is forced and etcd automatically scales up. If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal". Warning Do not delete and re-create the machine for the recovery host. If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps: Warning Do not delete and re-create the machine for the recovery host. For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node". Obtain the machine for one of the lost control plane hosts. 
In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal . Delete the machine of the lost control plane host by running: USD oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1 1 Specify the name of the control plane machine for the lost control plane host. A new machine is automatically provisioned after deleting the machine of the lost control plane host. Verify that a new machine has been created by running: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 The new machine, clustername-8qw5l-master-3 is being created and is ready after the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Repeat these steps for each lost control plane host that is not the recovery host. Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. 
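For the machine re-creation steps above, it can take several minutes for each replacement control plane machine to be provisioned. The following is a minimal polling sketch for waiting until a replacement machine reports the Running phase; the machine name used here is hypothetical.
# Sketch only: wait for a replacement control plane machine to reach the Running phase.
MACHINE=clustername-8qw5l-master-3   # hypothetical replacement machine name
until [ "$(oc get machine "${MACHINE}" -n openshift-machine-api -o jsonpath='{.status.phase}')" = "Running" ]; do
  echo "Waiting for ${MACHINE} to reach the Running phase..."
  sleep 30
done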
In a separate terminal window within the recovery host, export the recovery kubeconfig file by running the following command: USD export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig Force etcd redeployment. In the same terminal window where you exported the recovery kubeconfig file, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up. Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml Verify all nodes are updated to the latest revision. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer. In a terminal that has access to the cluster as a cluster-admin user, run the following commands. Force a new rollout for the Kubernetes API server: USD oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. Force a new rollout for the Kubernetes controller manager: USD oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. USD oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. 
The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. Force a new rollout for the Kubernetes scheduler: USD oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. USD oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. Verify that all control plane hosts have started and joined the cluster. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h To ensure that all workloads return to normal operation following a recovery procedure, restart each pod that stores Kubernetes API information. This includes OpenShift Container Platform components such as routers, Operators, and third-party components. Note On completion of the procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted. Consider using the system:admin kubeconfig file for immediate authentication. This method bases its authentication on SSL/TLS client certificates, as opposed to OAuth tokens. You can authenticate with this file by issuing the following command: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig Issue the following command to display your authenticated user name: USD oc whoami Additional resources Recommended etcd practices Installing a user-provisioned cluster on bare metal Replacing a bare-metal control plane node 6.12.7. Issues and workarounds for restoring a persistent storage state If your OpenShift Container Platform cluster uses persistent storage of any form, some of the cluster state is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated. Important The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice-versa. The following are some example scenarios that produce an out-of-date status: A MySQL database is running in a pod backed by a PV object.
Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume. Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start. Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to not work. You might have to manually update the credentials required by those drivers or Operators. A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist. To fix this problem, an administrator must: Manually remove the PVs with invalid devices. Remove symlinks from respective nodes. Delete LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources). 6.13. Pod disruption budgets Understand and configure pod disruption budgets. 6.13.1. Understanding how to use pod disruption budgets to specify the number of pods that must be up A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance. PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures). A PodDisruptionBudget object's configuration consists of the following key parts: A label selector, which is a label query over a set of pods. An availability level, which specifies the minimum number of pods that must be available simultaneously, either: minAvailable is the number of pods that must always be available, even during a disruption. maxUnavailable is the number of pods that can be unavailable during a disruption. Note Available refers to the number of pods that have the condition Ready=True . Ready=True means that the pod is able to serve requests and should be added to the load balancing pools of all matching services. A maxUnavailable of 0% or 0 or a minAvailable of 100% or equal to the number of replicas is permitted but can block nodes from being drained.
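Before a voluntary disruption such as a node drain, it can be useful to check how many disruptions a budget currently allows. The following is a minimal sketch, assuming a hypothetical PodDisruptionBudget named my-pdb in the my-project namespace.
# Sketch only: report how many voluntary disruptions the budget allows right now.
ALLOWED=$(oc get poddisruptionbudget my-pdb -n my-project -o jsonpath='{.status.disruptionsAllowed}')
if [ "${ALLOWED:-0}" -gt 0 ]; then
  echo "my-pdb currently allows ${ALLOWED} voluntary disruption(s)"
else
  echo "my-pdb allows no voluntary disruptions; draining a node that hosts these pods will block"
fi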
You can check for pod disruption budgets across all projects with the following: USD oc get poddisruptionbudget --all-namespaces Example output NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #... The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted. Note Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements. 6.13.2. Specifying the number of pods that must be up with pod disruption budgets You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time. Procedure To configure a pod disruption budget: Create a YAML file with an object definition similar to the following: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Or: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Run the following command to add the object to the project: USD oc create -f </path/to/file> -n <project_name> 6.14. Rotating or removing cloud provider credentials After installing OpenShift Container Platform, some organizations require the rotation or removal of the cloud provider credentials that were used during the initial installation. To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. 6.14.1. Rotating cloud provider credentials with the Cloud Credential Operator utility The Cloud Credential Operator (CCO) utility ccoctl supports updating secrets for clusters installed on IBM Cloud. 6.14.1.1. Rotating API keys for IBM Cloud You can rotate API keys for your existing service IDs and update the corresponding secrets. Prerequisites You have configured the ccoctl binary. You have existing service IDs in a live OpenShift Container Platform cluster installed on IBM Cloud.
Procedure Use the ccoctl utility to rotate your API keys for the service IDs and update the secrets: USD ccoctl ibmcloud refresh-keys \ --kubeconfig <openshift_kubeconfig_file> \ 1 --credentials-requests-dir <path_to_credential_requests_directory> \ 2 --name <name> 3 1 The kubeconfig file associated with the cluster. For example, <installation_directory>/auth/kubeconfig . 2 The directory where the credential requests are stored. 3 The name of the OpenShift Container Platform cluster. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 6.14.2. Rotating cloud provider credentials manually If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported. For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds Azure azure-credentials GCP gcp-credentials RHOSP openstack-credentials RHV ovirt-credentials VMware vSphere vsphere-creds Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials. Note If the vSphere CSI Driver Operator is enabled, this step is not required. To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command: USD oc patch kubecontrollermanager cluster \ -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date )"'"}}' \ --type=merge While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true . To view the status, run the following command: USD oc get co kube-controller-manager If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. 
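The listing and deletion sub-steps that follow can also be combined into a single loop. This is a minimal sketch, assuming an AWS cluster (AWSProviderSpec) and that the jq utility is available.
# Sketch only: delete every component secret referenced by AWS CredentialsRequest objects
# so that the CCO re-creates them from the updated root credential.
oc -n openshift-cloud-credential-operator get CredentialsRequest -o json \
  | jq -r '.items[] | select(.spec.providerSpec.kind=="AWSProviderSpec") | "\(.spec.secretRef.name) \(.spec.secretRef.namespace)"' \
  | while read -r name namespace; do
      oc delete secret "${name}" -n "${namespace}"
    done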
Get the names and namespaces of all referenced component secrets: USD oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef' where <provider_spec> is the corresponding value for your cloud provider: AWS: AWSProviderSpec GCP: GCPProviderSpec Partial example output for AWS { "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" } Delete each of the referenced component secrets: USD oc delete secret <secret_name> \ 1 -n <secret_namespace> 2 1 Specify the name of a secret. 2 Specify the namespace that contains the secret. Example deletion of an AWS secret USD oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones. Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. Additional resources vSphere CSI Driver Operator 6.14.3. Removing cloud provider credentials For clusters that use the Cloud Credential Operator (CCO) in mint mode, the administrator-level credential is stored in the kube-system namespace. The CCO uses the admin credential to process the CredentialsRequest objects in the cluster and create users for components with limited permissions. After installing an OpenShift Container Platform cluster with the CCO in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The CCO only requires the administrator-level credential during changes that require reconciling new or modified CredentialsRequest custom resources, such as minor cluster version updates. Note Before performing a minor version cluster update (for example, updating from OpenShift Container Platform 4.16 to 4.17), you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the update might be blocked. Prerequisites Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Delete Secret . Additional resources About the Cloud Credential Operator Amazon Web Services (AWS) secret format Microsoft Azure secret format Google Cloud Platform (GCP) secret format | [
"oc get machinesets -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get machines",
"spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.25.0",
"oc label nodes <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>,<key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.25.0",
"kind: Namespace apiVersion: v1 metadata: name: <local_zone_application_namespace> --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <pvc_name> namespace: <local_zone_application_namespace> spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: gp2-csi 1 volumeMode: Filesystem --- apiVersion: apps/v1 kind: Deployment 2 metadata: name: <local_zone_application> 3 namespace: <local_zone_application_namespace> 4 spec: selector: matchLabels: app: <local_zone_application> replicas: 1 template: metadata: labels: app: <local_zone_application> zone-group: USD{ZONE_GROUP_NAME} 5 spec: securityContext: seccompProfile: type: RuntimeDefault nodeSelector: 6 machine.openshift.io/zone-group: USD{ZONE_GROUP_NAME} tolerations: 7 - key: \"node-role.kubernetes.io/edge\" operator: \"Equal\" value: \"\" effect: \"NoSchedule\" containers: - image: openshift/origin-node command: - \"/bin/socat\" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \\\"printf \\\\\\\"HTTP/1.0 200 OK\\r\\n\\r\\n\\\\\\\"; sed -e \\\\\\\"/^\\r/q\\\\\\\"\\\"' imagePullPolicy: Always name: echoserver ports: - containerPort: 8080 volumeMounts: - mountPath: \"/mnt/storage\" name: data volumes: - name: data persistentVolumeClaim: claimName: <pvc_name>",
"apiVersion: v1 kind: Service 1 metadata: name: <local_zone_application> namespace: <local_zone_application_namespace> spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: 2 app: <local_zone_application>",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1",
"oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5",
"- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1",
"oc label node <node_name> <label>",
"oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=",
"cat infra.mcp.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2",
"oc create -f infra.mcp.yaml",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d",
"cat infra.mc.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra",
"oc create -f infra.mc.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m",
"oc describe nodes <node_name>",
"describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved",
"tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Equal 3 value: reserved 4",
"oc get ingresscontroller default -n openshift-ingress-operator -o yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default",
"oc edit ingresscontroller default -n openshift-ingress-operator",
"spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pod -n openshift-ingress -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>",
"oc get node <node_name> 1",
"NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.25.0",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pods -o wide -n openshift-image-registry",
"oc describe node <node_name>",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute",
"watch 'oc get pod -n openshift-monitoring -o wide'",
"oc delete pod -n openshift-monitoring <pod>",
"oc edit nodes.config/cluster",
"apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: \"v2\" 1",
"oc get mc",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 3m 1 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 3m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m worker-enable-cgroups-v2 3.2.0 10s",
"oc describe mc <name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: - systemd_unified_cgroup_hierarchy=1 1 - cgroup_no_v1=\"all\" 2 - psi=1 3",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready master 58m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.25.0 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.25.0",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"stat -c %T -f /sys/fs/cgroup",
"cgroup2fs",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit featuregate cluster",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit apiserver",
"spec: encryption: type: aescbc 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: routes.route.openshift.io",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: secrets, configmaps",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io",
"oc edit apiserver",
"spec: encryption: type: identity 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc debug --as-root node/<node_name>",
"sh-4.4# chroot /host",
"export HTTP_PROXY=http://<your_proxy.example.com>:8080",
"export HTTPS_PROXY=https://<your_proxy.example.com>:8080",
"export NO_PROXY=<example.com>",
"sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup",
"found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup",
"etcd member has been defragmented: <member_name> , memberID: <member_id>",
"failed defrag on member: <member_name> , memberID: <member_id> : <error_message>",
"oc -n openshift-etcd get pods -l k8s-app=etcd -o wide",
"etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table",
"Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com",
"sh-4.4# unset ETCDCTL_ENDPOINTS",
"sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag",
"Finished defragmenting etcd member[https://localhost:2379]",
"sh-4.4# etcdctl endpoint status -w table --cluster",
"+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"sh-4.4# etcdctl alarm list",
"memberID:12345678912345678912 alarm:NOSPACE",
"sh-4.4# etcdctl alarm disarm",
"sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp",
"sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp",
"sudo crictl ps | grep kube-apiserver | egrep -v \"operator|guard\"",
"sudo mv -v /var/lib/etcd/ /tmp",
"sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp",
"sudo crictl ps --name keepalived",
"ip -o address | egrep '<api_vip>|<ingress_vip>'",
"sudo ip address del <reported_vip> dev <reported_vip_device>",
"ip -o address | grep <api_vip>",
"sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup",
"...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml",
"oc get nodes -w",
"NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.25.0 host-172-25-75-38 Ready infra,worker 3d20h v1.25.0 host-172-25-75-40 Ready master 3d20h v1.25.0 host-172-25-75-65 Ready master 3d20h v1.25.0 host-172-25-75-74 Ready infra,worker 3d20h v1.25.0 host-172-25-75-79 Ready worker 3d20h v1.25.0 host-172-25-75-86 Ready worker 3d20h v1.25.0 host-172-25-75-98 Ready infra,worker 3d20h v1.25.0",
"ssh -i <ssh-key-path> core@<master-hostname>",
"sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc adm certificate approve <csr_name>",
"sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"",
"3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s",
"oc delete node <non-recovery-controlplane-host-1> <non-recovery-controlplane-host-2>",
"oc -n openshift-ovn-kubernetes get ds/ovnkube-master -o yaml | grep -E '<non-recovery_controller_ip_1>|<non-recovery_controller_ip_2>'",
"sudo rm -f /var/lib/ovn/etc/*.db",
"oc delete pods -l app=ovnkube-master -n openshift-ovn-kubernetes",
"oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE ovnkube-master-nb24h 4/4 Running 0 48s",
"oc get pods -n openshift-ovn-kubernetes -o name | grep ovnkube-node | while read p ; do oc delete USDp -n openshift-ovn-kubernetes ; done",
"oc get po -n openshift-ovn-kubernetes",
"oc delete node <node>",
"ssh -i <ssh-key-path> core@<node>",
"sudo mv /var/lib/kubelet/pki/* /tmp",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending",
"adm certificate approve csr-<uuid>",
"oc get nodes",
"oc get pods -n openshift-ovn-kubernetes | grep ovnkube-node",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig",
"oc whoami",
"oc get poddisruptionbudget --all-namespaces",
"NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod",
"oc create -f </path/to/file> -n <project_name>",
"ccoctl ibmcloud refresh-keys --kubeconfig <openshift_kubeconfig_file> \\ 1 --credentials-requests-dir <path_to_credential_requests_directory> \\ 2 --name <name> 3",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge",
"oc get co kube-controller-manager",
"oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'",
"{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }",
"oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2",
"oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/post-installation_configuration/post-install-cluster-tasks |
Chapter 9. Adding more RHEL compute machines to an OpenShift Container Platform cluster | Chapter 9. Adding more RHEL compute machines to an OpenShift Container Platform cluster If your OpenShift Container Platform cluster already includes Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, you can add more RHEL compute machines to it. 9.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.9, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines, which are also known as worker machines, in your cluster if you use a user-provisioned infrastructure installation. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane, or master, machines in your cluster. As with all installations that use user-provisioned infrastructure, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Important Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. You must add any RHEL compute machines to the cluster after you initialize the control plane. 9.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute, or worker, machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base OS: RHEL 7.9 or RHEL 7.9 through 8.7 with "Minimal" installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is deprecated. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. In addition, you cannot upgrade your RHEL 7 compute machines to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. See the "Deleting nodes" section for more information. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Enabling FIPS Mode in the RHEL 7 documentation. 
Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/ . Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library. Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster's API service endpoints. Additional resources Deleting nodes 9.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 9.3. Preparing an image for your cloud Amazon Machine Images (AMI) are required since various image formats cannot be used directly by AWS. You may use the AMIs that Red Hat has provided, or you can manually import your own images. The AMI must exist before the EC2 instance can be provisioned. You must list the AMI IDs so that the correct RHEL version needed for the compute machines is selected. 9.3.1. Listing latest available RHEL images on AWS AMI IDs correspond to native boot images for AWS. Because an AMI must exist before the EC2 instance is provisioned, you will need to know the AMI ID before configuration. The AWS Command Line Interface (CLI) is used to list the available Red Hat Enterprise Linux (RHEL) image IDs. Prerequisites You have installed the AWS CLI. Procedure Use this command to list RHEL 8.4 Amazon Machine Images (AMI): USD aws ec2 describe-images --owners 309956199498 \ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \ 2 --filters "Name=name,Values=RHEL-8.4*" \ 3 --region us-east-1 \ 4 --output table 5 1 The --owners command option shows Red Hat images based on the account ID 309956199498 . Important This account ID is required to display AMI IDs for images that are provided by Red Hat. 2 The --query command option sets how the images are sorted with the parameters 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' . In this case, the images are sorted by the creation date, and the table is structured to show the creation date, the name of the image, and the AMI IDs. 3 The --filter command option sets which version of RHEL is shown. In this example, since the filter is set by "Name=name,Values=RHEL-8.4*" , then RHEL 8.4 AMIs are shown. 
4 The --region command option sets where the region where an AMI is stored. 5 The --output command option sets how the results are displayed. Note When creating a RHEL compute machine for AWS, ensure that the AMI is RHEL 8.4. Example output ------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+ Additional resources You may also manually import RHEL images to AWS . 9.4. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories. On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id , if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories Enable only the repositories required by OpenShift Container Platform 4.9. For RHEL 7 nodes, you must enable the following repositories: # subscription-manager repos \ --enable="rhel-7-server-rpms" \ --enable="rhel-7-fast-datapath-rpms" \ --enable="rhel-7-server-extras-rpms" \ --enable="rhel-7-server-optional-rpms" \ --enable="rhel-7-server-ose-4.9-rpms" Note Use of RHEL 7 nodes is deprecated and planned for removal in a future release of OpenShift Container Platform 4. For RHEL 8 nodes, you must enable the following repositories: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.9-for-rhel-8-x86_64-rpms" \ --enable="fast-datapath-for-rhel-8-x86_64-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 9.5. Attaching the role permissions to RHEL instance in AWS Using the Amazon IAM console in your browser, you may select the needed roles and assign them to a worker node. Procedure From the AWS IAM console, create your desired IAM role . Attach the IAM role to the desired worker node. 
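The preceding procedure attaches the IAM role through the console. As an alternative, the same step can be scripted with the AWS CLI. The commands below are a minimal sketch rather than part of the official procedure: they assume the AWS CLI is configured with sufficient IAM permissions, and the instance profile name, role name, and instance ID are hypothetical placeholders.
USD aws iam create-instance-profile --instance-profile-name rhel-worker-profile
USD aws iam add-role-to-instance-profile --instance-profile-name rhel-worker-profile --role-name <your_iam_role>
USD aws ec2 associate-iam-instance-profile --instance-id <worker_instance_id> --iam-instance-profile Name=rhel-worker-profile
You can confirm the result with aws ec2 describe-iam-instance-profile-associations before continuing.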
Additional resources See Required AWS permissions for IAM roles . 9.6. Tagging a RHEL worker node as owned or shared A cluster uses the value of the kubernetes.io/cluster/<clusterid>,Value=(owned|shared) tag to determine the lifetime of the resources related to the AWS cluster. The owned tag value should be added if the resource should be destroyed as part of destroying the cluster. The shared tag value should be added if the resource continues to exist after the cluster has been destroyed. This tagging denotes that the cluster uses this resource, but there is a separate owner for the resource. Procedure With RHEL compute machines, the RHEL worker instance must be tagged with kubernetes.io/cluster/<clusterid>=owned or kubernetes.io/cluster/<cluster-id>=shared . Note Do not tag all existing security groups with the kubernetes.io/cluster/<name>,Value=<clusterid> tag, or the Elastic Load Balancing (ELB) will not be able to create a load balancer. 9.7. Adding more RHEL compute machines to your cluster You can add more compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.9 cluster. Prerequisites Your OpenShift Container Platform cluster already contains RHEL compute nodes. The hosts file that you used to add the first RHEL compute machines to your cluster is on the machine that you use to run the playbook. The machine that you run the playbook on must be able to access all of the RHEL hosts. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. The kubeconfig file for the cluster and the installation program that you used to install the cluster are on the machine that you use to run the playbook. You must prepare the RHEL hosts for installation. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. If you use SSH key-based authentication, you must manage the key with an SSH agent. Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Procedure Open the Ansible inventory file at /<path>/inventory/hosts that defines your compute machine hosts and required variables. Rename the [new_workers] section of the file to [workers] . Add a [new_workers] section to the file and define the fully-qualified domain names for each new host. The file resembles the following example: In this example, the mycluster-rhel8-0.example.com and mycluster-rhel8-1.example.com machines are in the cluster and you add the mycluster-rhel8-2.example.com and mycluster-rhel8-3.example.com machines. Navigate to the Ansible playbook directory: USD cd /usr/share/ansible/openshift-ansible Run the scaleup playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 9.8. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created.
Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
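The note above requires an automated method of approving kubelet serving CSRs on user-provisioned infrastructure but does not prescribe one. The following loop is a deliberately naive sketch, not an official recommendation: it simply reruns the bulk approval command from this section on an interval and performs none of the requestor or node-identity checks that the note calls for, so treat it only as a starting point for lab environments.
while true; do oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve; sleep 60; done
When the client and server approvals are in place, whether granted manually or by such a helper, the machines transition to the Ready status.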
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 9.9. Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. Parameter Description Values ansible_user The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. A user name on the system. The default value is root . ansible_become If the value of ansible_user is not root, you must set ansible_become to True , and the user that you specify as the ansible_user must be configured for passwordless sudo access. True . If the value is not True , do not specify or define this parameter. openshift_kubeconfig_path Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. The path and name of the configuration file. | [
"aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.4*\" \\ 3 --region us-east-1 \\ 4 --output table 5",
"------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-fast-datapath-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-optional-rpms\" --enable=\"rhel-7-server-ose-4.9-rpms\"",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.9-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com [new_workers] mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/machine_management/more-rhel-compute |
Chapter 3. About OpenShift Kubernetes Engine | Chapter 3. About OpenShift Kubernetes Engine As of 27 April 2020, Red Hat has decided to rename Red Hat OpenShift Container Engine to Red Hat OpenShift Kubernetes Engine to better communicate what value the product offering delivers. Red Hat OpenShift Kubernetes Engine is a product offering from Red Hat that lets you use an enterprise class Kubernetes platform as a production platform for launching containers. You download and install OpenShift Kubernetes Engine the same way as OpenShift Container Platform as they are the same binary distribution, but OpenShift Kubernetes Engine offers a subset of the features that OpenShift Container Platform offers. 3.1. Similarities and differences You can see the similarities and differences between OpenShift Kubernetes Engine and OpenShift Container Platform in the following table: Table 3.1. Product comparison for OpenShift Kubernetes Engine and OpenShift Container Platform OpenShift Kubernetes Engine OpenShift Container Platform Fully Automated Installers Yes Yes Over the Air Smart Upgrades Yes Yes Enterprise Secured Kubernetes Yes Yes Kubectl and oc automated command line Yes Yes Operator Lifecycle Manager (OLM) Yes Yes Administrator Web console Yes Yes OpenShift Virtualization Yes Yes User Workload Monitoring Yes Metering and Cost Management SaaS Service Yes Platform Logging Yes Developer Web Console Yes Developer Application Catalog Yes Source to Image and Builder Automation (Tekton) Yes OpenShift Service Mesh (Kiali, Jaeger, and OpenTracing) Yes OpenShift Serverless (Knative) Yes OpenShift Pipelines (Jenkins and Tekton) Yes Embedded Component of IBM Cloud Pak and RHT MW Bundles Yes 3.1.1. Core Kubernetes and container orchestration OpenShift Kubernetes Engine offers full access to an enterprise-ready Kubernetes environment that is easy to install and offers an extensive compatibility test matrix with many of the software elements that you might use in your data center. OpenShift Kubernetes Engine offers the same service level agreements, bug fixes, and common vulnerabilities and errors protection as OpenShift Container Platform. OpenShift Kubernetes Engine includes a Red Hat Enterprise Linux (RHEL) Virtual Datacenter and Red Hat Enterprise Linux CoreOS (RHCOS) entitlement that allows you to use an integrated Linux operating system with container runtime from the same technology provider. The OpenShift Kubernetes Engine subscription is compatible with the Red Hat OpenShift support for Windows Containers subscription. 3.1.2. Enterprise-ready configurations OpenShift Kubernetes Engine uses the same security options and default settings as the OpenShift Container Platform. Default security context constraints, pod security policies, best practice network and storage settings, service account configuration, SELinux integration, HAproxy edge routing configuration, and all other standard protections that OpenShift Container Platform offers are available in OpenShift Kubernetes Engine. OpenShift Kubernetes Engine offers full access to the integrated monitoring solution that OpenShift Container Platform uses, which is based on Prometheus and offers deep coverage and alerting for common Kubernetes issues. OpenShift Kubernetes Engine uses the same installation and upgrade automation as OpenShift Container Platform. 3.1.3. Standard infrastructure services With an OpenShift Kubernetes Engine subscription, you receive support for all storage plug-ins that OpenShift Container Platform supports. 
In terms of networking, OpenShift Kubernetes Engine offers full and supported access to the Kubernetes Container Network Interface (CNI) and therefore allows you to use any third-party SDN that supports OpenShift Container Platform. It also allows you to use the included Open vSwitch software defined network to its fullest extent. OpenShift Kubernetes Engine allows you to take full advantage of the OVN Kubernetes overlay, Multus, and Multus plug-ins that are supported on OpenShift Container Platform. OpenShift Kubernetes Engine allows customers to use a Kubernetes Network Policy to create microsegmentation between deployed application services on the cluster. You can also use the Route API objects that are found in OpenShift Container Platform, including its sophisticated integration with the HAproxy edge routing layer as an out of the box Kubernetes ingress controller. 3.1.4. Core user experience OpenShift Kubernetes Engine users have full access to Kubernetes Operators, pod deployment strategies, Helm, and OpenShift Container Platform templates. OpenShift Kubernetes Engine users can use both the oc and kubectl command line interfaces. OpenShift Kubernetes Engine also offers an administrator web-based console that shows all aspects of the deployed container services and offers a container-as-a-service experience. OpenShift Kubernetes Engine grants access to the Operator Life Cycle Manager that helps you control access to content on the cluster and life cycle operator-enabled services that you use. With an OpenShift Kubernetes Engine subscription, you receive access to the Kubernetes namespace, the OpenShift Project API object, and cluster-level Prometheus monitoring metrics and events. 3.1.5. Maintained and curated content With an OpenShift Kubernetes Engine subscription, you receive access to the OpenShift Container Platform content from the Red Hat Ecosystem Catalog and Red Hat Connect ISV marketplace. You can access all maintained and curated content that the OpenShift Container Platform ecosystem offers. 3.1.6. OpenShift Container Storage compatible OpenShift Kubernetes Engine is compatible and supported with your purchase of OpenShift Container Storage. 3.1.7. Red Hat Middleware compatible OpenShift Kubernetes Engine is compatible and supported with individual Red Hat Middleware product solutions. Red Hat Middleware Bundles that include OpenShift embedded in them only contain OpenShift Container Platform. 3.1.8. OpenShift Serverless OpenShift Kubernetes Engine does not include OpenShift Serverless support. Please use OpenShift Container Platform for this support. 3.1.9. Quay Integration compatible OpenShift Kubernetes Engine is compatible and supported with a Red Hat Quay purchase. 3.1.10. OpenShift Virtualization OpenShift Kubernetes Engine includes support for the Red Hat product offerings derived from the kubevirt.io open source project. 3.1.11. Advanced cluster management OpenShift Kubernetes Engine is compatible with your additional purchase of Red Hat Advanced Cluster Management for Kubernetes. An OpenShift Kubernetes Engine subscription does not offer a cluster-wide log aggregation solution or support Elasticsearch, Fluentd, or Kibana based logging solutions. Similarly, the chargeback features found in OpenShift Container Platform or the console.redhat.com Cost Management SaaS service are not supported with OpenShift Kubernetes Engine.
Red Hat Service Mesh capabilities derived from the open source istio.io and kiali.io projects that offer OpenTracing observability for containerized services on OpenShift Container Platform are not supported in OpenShift Kubernetes Engine. 3.1.12. Advanced networking The standard networking solutions in OpenShift Container Platform are supported with an OpenShift Kubernetes Engine subscription. OpenShift Container Platform's Kubernetes CNI plug-in for automation of multi-tenant network segmentation between OpenShift Container Platform projects is entitled for use with OpenShift Kubernetes Engine. OpenShift Kubernetes Engine offers all the granular control of the source IP addresses that are used by application services on the cluster. Those egress IP address controls are entitled for use with OpenShift Kubernetes Engine. OpenShift Container Platform offers ingress routing to on cluster services that use non-standard ports when no public cloud provider is in use via the VIP pods found in OpenShift Container Platform. That ingress solution is supported in OpenShift Kubernetes Engine. OpenShift Kubernetes Engine users are supported for the Kubernetes ingress control object, which offers integrations with public cloud providers. Red Hat Service Mesh, which is derived from the istio.io open source project, is not supported in OpenShift Kubernetes Engine. Also, the Kourier ingress controller found in OpenShift Serverless is not supported on OpenShift Kubernetes Engine. 3.1.13. Developer experience With OpenShift Kubernetes Engine, the following capabilities are not supported: The CodeReady developer experience utilities and tools, such as CodeReady Workspaces. OpenShift Container Platform's pipeline feature that integrates a streamlined, Kubernetes-enabled Jenkins and Tekton experience in the user's project space. The OpenShift Container Platform's source-to-image feature, which allows you to easily deploy source code, dockerfiles, or container images across the cluster. Build strategies, builder pods, or Tekton for end user container deployments. The odo developer command line. The developer persona in the OpenShift Container Platform web console. 3.1.14. Feature summary The following table is a summary of the feature availability in OpenShift Kubernetes Engine and OpenShift Container Platform. Table 3.2. 
Features in OpenShift Kubernetes Engine and OpenShift Container Platform Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Fully Automated Installers (IPI) Included Included N/A Customizable Installers (UPI) Included Included N/A Disconnected Installation Included Included N/A Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) entitlement Included Included N/A Existing RHEL manual attach to cluster (BYO) Included Included N/A CRIO Runtime Included Included N/A Over the Air Smart Upgrades and Operating System (RHCOS) Management Included Included N/A Enterprise Secured Kubernetes Included Included N/A Kubectl and oc automated command line Included Included N/A Auth Integrations, RBAC, SCC, Multi-Tenancy Admission Controller Included Included N/A Operator Lifecycle Manager (OLM) Included Included N/A Administrator web console Included Included N/A OpenShift Virtualization Included Included OpenShift Virtualization Operator Compliance Operator provided by Red Hat Included Included Compliance Operator File Integrity Operator Included Included File Integrity Operator Gatekeeper Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Gatekeeper Operator Klusterlet Not Included - Requires separate subscription Not Included - Requires separate subscription N/A Kube Descheduler Operator provided by Red Hat Included Included Kube Descheduler Operator Local Storage provided by Red Hat Included Included Local Storage Operator Node Feature Discovery provided by Red Hat Included Included Node Feature Discovery Operator Performance Add-on Operator Included Included Performance Add-on Operator PTP Operator provided by Red Hat Included Included PTP Operator Service Telemetry Operator provided by Red Hat Included Included Service Telemetry Operator SR-IOV Network Operator Included Included SR-IOV Network Operator Vertical Pod Autoscaler Included Included Vertical Pod Autoscaler Cluster Monitoring (Prometheus) Included Included Cluster Monitoring Device Manager (for example, GPU) Included Included N/A Log Forwarding (with fluentd) Included Included Red Hat OpenShift Logging Operator (for log forwarding with fluentd) Telemeter and Insights Connected Experience Included Included N/A Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name OpenShift Cloud Manager SaaS Service Included Included N/A OVS and OVN SDN Included Included N/A MetalLB Included Included MetalLB Operator HAProxy Ingress Controller Included Included N/A Red Hat OpenStack Platform (RHOSP) Kuryr Integration Included Included N/A Ingress Cluster-wide Firewall Included Included N/A Egress Pod and Namespace Granular Control Included Included N/A Ingress Non-Standard Ports Included Included N/A Multus and Available Multus Plugins Included Included N/A Network Policies Included Included N/A IPv6 Single and Dual Stack Included Included N/A CNI Plugin ISV Compatibility Included Included N/A CSI Plugin ISV Compatibility Included Included N/A RHT and IBM middleware a la carte purchases (not included in OpenShift Container Platform or OpenShift Kubernetes Engine) Included Included N/A ISV or Partner Operator and Container Compatibility (not included in OpenShift Container Platform or OpenShift Kubernetes Engine) Included Included N/A Embedded OperatorHub Included Included N/A Embedded Marketplace Included Included N/A Quay Compatibility (not included) Included Included N/A RHEL Software Collections and RHT SSO Common Service 
(included) Included Included N/A Embedded Registry Included Included N/A Helm Included Included N/A User Workload Monitoring Not Included Included N/A Metering and Cost Management SaaS Service Not Included Included N/A Platform Logging Not Included Included Red Hat OpenShift Logging Operator OpenShift Elasticsearch Operator provided by Red Hat Not Included Cannot be run standalone N/A Developer Web Console Not Included Included N/A Developer Application Catalog Not Included Included N/A Source to Image and Builder Automation (Tekton) Not Included Included N/A OpenShift Service Mesh Not Included Included OpenShift Service Mesh Operator Service Binding Operator Not Included Included Service Binding Operator Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Red Hat OpenShift Serverless Not Included Included OpenShift Serverless Operator Web Terminal provided by Red Hat Not Included Included Web Terminal Operator Jenkins Operator provided by Red Hat Not Included Included Jenkins Operator Red Hat OpenShift Pipelines Operator Not Included Included OpenShift Pipelines Operator Embedded Component of IBM Cloud Pak and RHT MW Bundles Not Included Included N/A Red Hat OpenShift GitOps Not Included Included OpenShift GitOps Red Hat CodeReady Workspaces Not Included Included CodeReady Workspaces Red Hat CodeReady Containers Not Included Included N/A Quay Bridge Operator provided by Red Hat Not Included Included Quay Bridge Operator Quay Container Security provided by Red Hat Not Included Included Quay Operator Red Hat OpenShift distributed tracing platform Not Included Included Red Hat OpenShift distributed tracing platform Operator Red Hat OpenShift Kiali Not Included Included Kiali Operator Metering provided by Red Hat (deprecated) Not Included Included N/A Migration Toolkit for Containers Operator Not Included Included Migration Toolkit for Containers Operator Cost management for OpenShift Not included Included N/A Red Hat JBoss Web Server Not included Included JWS Operator Red Hat Build of Quarkus Not included Included N/A Kourier Ingress Controller Not included Included N/A RHT Middleware Bundles Sub Compatibility (not included in OpenShift Container Platform) Not included Included N/A IBM Cloud Pak Sub Compatibility (not included in OpenShift Container Platform) Not included Included N/A OpenShift Do ( odo ) Not included Included N/A Source to Image and Tekton Builders Not included Included N/A OpenShift Serverless FaaS Not included Included N/A IDE Integrations Not included Included N/A Windows Machine Config Operator Community Windows Machine Config Operator included - no subscription required Red Hat Windows Machine Config Operator included - Requires separate subscription Windows Machine Config Operator Red Hat Quay Not Included - Requires separate subscription Not Included - Requires separate subscription Quay Operator Red Hat Advanced Cluster Management Not Included - Requires separate subscription Not Included - Requires separate subscription Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security Not Included - Requires separate subscription Not Included - Requires separate subscription N/A OpenShift Container Storage Not Included - Requires separate subscription Not Included - Requires separate subscription OpenShift Container Storage Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Ansible Automation Platform Resource Operator Not Included - Requires separate subscription Not Included - Requires separate 
subscription Ansible Automation Platform Resource Operator Business Automation provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Business Automation Operator Data Grid provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Data Grid Operator Red Hat Integration provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration Operator Red Hat Integration - 3Scale provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription 3scale Red Hat Integration - 3Scale APICast gateway provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription 3scale APIcast Red Hat Integration - AMQ Broker Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Broker Red Hat Integration - AMQ Broker LTS Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration - AMQ Interconnect Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Interconnect Red Hat Integration - AMQ Online Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration - AMQ Streams Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Streams Red Hat Integration - Camel K Not Included - Requires separate subscription Not Included - Requires separate subscription Camel K Red Hat Integration - Fuse Console Not Included - Requires separate subscription Not Included - Requires separate subscription Fuse Console Red Hat Integration - Fuse Online Not Included - Requires separate subscription Not Included - Requires separate subscription Fuse Online Red Hat Integration - Service Registry Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Service Registry API Designer provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription API Designer JBoss EAP provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription JBoss EAP JBoss Web Server provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription JBoss Web Server Smart Gateway Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Smart Gateway Operator Kubernetes NMState Operator Included Included N/A 3.2. Subscription Limitations OpenShift Kubernetes Engine is a subscription offering that provides OpenShift Container Platform with a limited set of supported features at a lower list price. OpenShift Kubernetes Engine and OpenShift Container Platform are the same product and, therefore, all software and features are delivered in both. There is only one download, OpenShift Container Platform. OpenShift Kubernetes Engine uses the OpenShift Container Platform documentation and support services and bug errata for this reason. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/about/oke-about |
Chapter 2. Logging 6.0 | Chapter 2. Logging 6.0 2.1. Release notes 2.1.1. Logging 6.0.3 This release includes RHBA-2024:10991 . 2.1.1.1. New features and enhancements With this update, the Loki Operator supports the configuring of the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in OpenShift Container Platform 4.17 or later. ( LOG-6421 ) 2.1.1.2. Bug fixes Before this update, the collector used the default settings to collect audit logs, which did not account for back pressure from output receivers. With this update, the audit log collection is optimized for file handling and log reading. ( LOG-6034 ) Before this update, any namespace containing openshift or kube was treated as an infrastructure namespace. With this update, only the following namespaces are treated as infrastructure namespaces: default , kube , openshift , and namespaces that begin with openshift- or kube- . ( LOG-6204 ) Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. ( LOG-6343 ) Before this update, pipeline validation might enter an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. ( LOG-6352 ) Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. ( LOG-6406 ) Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. ( LOG-6441 ) Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. ( LOG-6486 ) Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. ( LOG-6543 ) 2.1.1.3. CVEs CVE-2019-12900 CVE-2024-2511 CVE-2024-3596 CVE-2024-4603 CVE-2024-4741 CVE-2024-5535 CVE-2024-10963 CVE-2024-50602 2.1.2. Logging 6.0.2 This release includes RHBA-2024:10051 . 2.1.2.1. Bug fixes Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. ( LOG-5325 ) Before this update, the collector would discard audit log messages that exceeded the configured threshold. This modifies the audit configuration thresholds for the maximum line size as well as the number of bytes read during a read cycle. ( LOG-5998 ) Before this update, the Cluster Logging Operator did not watch and reconcile resources associated with an instance of a ClusterLogForwarder like it did in prior releases. This update modifies the operator to watch and reconcile all resources it owns and creates. 
( LOG-6264 ) Before this update, log events with an unknown severity level sent to Google Cloud Logging would trigger a warning in the vector collector, which would then default the severity to 'DEFAULT'. With this update, log severity levels are now standardized to match Google Cloud Logging specifications, and audit logs are assigned a severity of 'INFO'. ( LOG-6296 ) Before this update, when infrastructure namespaces were included in application inputs, the log_type was set as application . With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure . ( LOG-6354 ) Before this update, specifying a value for the syslog.enrichment field of the ClusterLogForwarder added namespace_name , container_name , and pod_name to the messages of non-container logs. With this update, only container logs include namespace_name , container_name , and pod_name in their messages when syslog.enrichment is set. ( LOG-6402 ) 2.1.2.2. CVEs CVE-2024-6119 CVE-2024-6232 2.1.3. Logging 6.0.1 This release includes OpenShift Logging Bug Fix Release 6.0.1 . 2.1.3.1. Bug fixes With this update, the default memory limit for the collector has been increased from 1024 Mi to 2024 Mi. However, users should always adjust their resource limits according to their cluster specifications and needs. ( LOG-6180 ) Before this update, the Loki Operator failed to add the default namespace label to all AlertingRule resources, which caused the User-Workload-Monitoring Alertmanager to skip routing these alerts. This update adds the rule namespace as a label to all alerting and recording rules, resolving the issue and restoring proper alert routing in Alertmanager. ( LOG-6151 ) Before this update, the LokiStack ruler component view did not initialize properly, causing an invalid field error when the ruler component was disabled. This update ensures that the component view initializes with an empty value, resolving the issue. ( LOG-6129 ) Before this update, it was possible to set log_source in the prune filter, which could lead to inconsistent log data. With this update, the configuration is validated before being applied, and any configuration that includes log_source in the prune filter is rejected. ( LOG-6202 ) 2.1.3.2. CVEs CVE-2024-24791 CVE-2024-34155 CVE-2024-34156 CVE-2024-34158 CVE-2024-6104 CVE-2024-6119 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 2.1.4. Logging 6.0.0 This release includes Logging for Red Hat OpenShift Bug Fix Release 6.0.0 Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Table 2.1. Upstream component versions logging Version Component Version Operator eventrouter logfilemetricexporter loki lokistack-gateway opa-openshift vector 6.0 0.4 1.1 3.1.0 0.1 0.1 0.37.1 2.1.5. Removal notice With this release, logging no longer supports the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io custom resources. Refer to the product documentation for details on the replacement features. ( LOG-5803 ) With this release, logging no longer manages or deploys log storage (such as Elasticsearch), visualization (such as Kibana), or Fluentd-based log collectors. ( LOG-5368 ) Note In order to continue to use Elasticsearch and Kibana managed by the elasticsearch-operator, the administrator must modify those object's ownerRefs before deleting the ClusterLogging resource. 2.1.6. 
New features and enhancements This feature introduces a new architecture for logging for Red Hat OpenShift by shifting component responsibilities to their relevant Operators, such as for storage, visualization, and collection. It introduces the ClusterLogForwarder.observability.openshift.io API for log collection and forwarding. Support for the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io APIs, along with the Red Hat managed Elastic stack (Elasticsearch and Kibana), is removed. Users are encouraged to migrate to the Red Hat LokiStack for log storage. Existing managed Elasticsearch deployments can be used for a limited time. Automated migration for log collection is not provided, so administrators need to create a new ClusterLogForwarder.observability.openshift.io specification to replace their custom resources. Refer to the official product documentation for more details. ( LOG-3493 ) With this release, the responsibility for deploying the logging view plugin shifts from the Red Hat OpenShift Logging Operator to the Cluster Observability Operator (COO). For new log storage installations that need visualization, the Cluster Observability Operator and the associated UIPlugin resource must be deployed. Refer to the Cluster Observability Operator Overview product documentation for more details. ( LOG-5461 ) This enhancement sets default requests and limits for Vector collector deployments' memory and CPU usage based on Vector documentation recommendations. ( LOG-4745 ) This enhancement updates Vector to align with the upstream version v0.37.1. ( LOG-5296 ) This enhancement introduces an alert that triggers when log collectors buffer logs to a node's file system and use over 15% of the available space, indicating potential back pressure issues. ( LOG-5381 ) This enhancement updates the selectors for all components to use common Kubernetes labels. ( LOG-5906 ) This enhancement changes the collector configuration to deploy as a ConfigMap instead of a secret, allowing users to view and edit the configuration when the ClusterLogForwarder is set to Unmanaged. ( LOG-5599 ) This enhancement adds the ability to configure the Vector collector log level using an annotation on the ClusterLogForwarder, with options including trace, debug, info, warn, error, or off. ( LOG-5372 ) This enhancement adds validation to reject configurations where Amazon CloudWatch outputs use multiple AWS roles, preventing incorrect log routing. ( LOG-5640 ) This enhancement removes the Log Bytes Collected and Log Bytes Sent graphs from the metrics dashboard. ( LOG-5964 ) This enhancement updates the must-gather functionality to only capture information for inspecting Logging 6.0 components, including Vector deployments from ClusterLogForwarder.observability.openshift.io resources and the Red Hat managed LokiStack. ( LOG-5949 ) This enhancement improves Azure storage secret validation by providing early warnings for specific error conditions. ( LOG-4571 ) This enhancement updates the ClusterLogForwarder API to follow the Kubernetes standards. ( LOG-5977 ) Example of a new configuration in the ClusterLogForwarder custom resource for the updated API apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <name> spec: outputs: - name: <output_name> type: <output_type> <output_type>: tuning: deliveryMode: AtMostOnce 2.1.7. Technology Preview features This release introduces a Technology Preview feature for log forwarding using OpenTelemetry. 
A new output type,` OTLP`, allows sending JSON-encoded log records using the OpenTelemetry data model and resource semantic conventions. ( LOG-4225 ) 2.1.8. Bug fixes Before this update, the CollectorHighErrorRate and CollectorVeryHighErrorRate alerts were still present. With this update, both alerts are removed in the logging 6.0 release but might return in a future release. ( LOG-3432 ) 2.1.9. CVEs CVE-2024-34397 2.2. Logging 6.0 The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. 2.2.1. Inputs and Outputs Inputs specify the sources of logs to be forwarded. Logging provides built-in input types: application , infrastructure , and audit , which select logs from different parts of your cluster. You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. 2.2.2. Receiver Input Type The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog . The ReceiverSpec defines the configuration for a receiver input. 2.2.3. Pipelines and Filters Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. Filters can be used to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. 2.2.4. Operator Behavior The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field: When set to Managed (default), the operator actively manages the logging resources to match the configuration defined in the spec. When set to Unmanaged , the operator does not take any action, allowing you to manually manage the logging components. 2.2.5. Validation Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. 2.2.6. Quick Start Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Procedure Install the Red Hat OpenShift Logging Operator , Loki Operator , and Cluster Observability Operator (COO) from OperatorHub. 
Create a secret to access an existing object storage bucket: Example command for AWS USD oc create secret generic logging-loki-s3 \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<aws_bucket_endpoint>" \ --from-literal=access_key_id="<aws_access_key_id>" \ --from-literal=access_key_secret="<aws_access_key_secret>" \ --from-literal=region="<aws_region_of_your_bucket>" \ -n openshift-logging Create a LokiStack custom resource (CR) in the openshift-logging namespace: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2022-06-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging Create a service account for the collector: USD oc create sa collector -n openshift-logging Bind the ClusterRole to the service account: USD oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging Create a UIPlugin to enable the Log section in the Observe tab: apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki Add additional roles to the collector service account: USD oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging USD oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging Create a ClusterLogForwarder CR to configure log forwarding: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack Verification Verify that logs are visible in the Log section of the Observe tab in the OpenShift Container Platform web console. 2.3. Upgrading to Logging 6.0 Logging v6.0 is a significant upgrade from releases, achieving several longstanding goals of Cluster Logging: Introduction of distinct operators to manage logging components (e.g., collectors, storage, visualization). Removal of support for managed log storage and visualization based on Elastic products (i.e., Elasticsearch, Kibana). Deprecation of the Fluentd log collector implementation. Removal of support for ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io resources. Note The cluster-logging-operator does not provide an automated upgrade process. Given the various configurations for log collection, forwarding, and storage, no automated upgrade is provided by the cluster-logging-operator . This documentation assists administrators in converting existing ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io specifications to the new API. Examples of migrated ClusterLogForwarder.observability.openshift.io resources for common use cases are included. 2.3.1. Using the oc explain command The oc explain command is an essential tool in the OpenShift CLI oc that provides detailed descriptions of the fields within Custom Resources (CRs). 
This command is invaluable for administrators and developers who are configuring or troubleshooting resources in an OpenShift cluster. 2.3.1.1. Resource Descriptions oc explain offers in-depth explanations of all fields associated with a specific object. This includes standard resources like pods and services, as well as more complex entities like statefulsets and custom resources defined by Operators. To view the documentation for the outputs field of the ClusterLogForwarder custom resource, you can use: USD oc explain clusterlogforwarders.observability.openshift.io.spec.outputs Note In place of clusterlogforwarder the short form obsclf can be used. This will display detailed information about these fields, including their types, default values, and any associated sub-fields. 2.3.1.2. Hierarchical Structure The command displays the structure of resource fields in a hierarchical format, clarifying the relationships between different configuration options. For instance, here's how you can drill down into the storage configuration for a LokiStack custom resource: USD oc explain lokistacks.loki.grafana.com USD oc explain lokistacks.loki.grafana.com.spec USD oc explain lokistacks.loki.grafana.com.spec.storage USD oc explain lokistacks.loki.grafana.com.spec.storage.schemas Each command reveals a deeper level of the resource specification, making the structure clear. 2.3.1.3. Type Information oc explain also indicates the type of each field (such as string, integer, or boolean), allowing you to verify that resource definitions use the correct data types. For example: USD oc explain lokistacks.loki.grafana.com.spec.size This will show that size should be defined using an integer value. 2.3.1.4. Default Values When applicable, the command shows the default values for fields, providing insights into what values will be used if none are explicitly specified. Again using lokistacks.loki.grafana.com as an example: USD oc explain lokistacks.spec.template.distributor.replicas Example output GROUP: loki.grafana.com KIND: LokiStack VERSION: v1 FIELD: replicas <integer> DESCRIPTION: Replicas defines the number of replica pods of the component. 2.3.2. Log Storage The only managed log storage solution available in this release is a Lokistack, managed by the loki-operator . This solution, previously available as the preferred alternative to the managed Elasticsearch offering, remains unchanged in its deployment process. Important To continue using an existing Red Hat managed Elasticsearch or Kibana deployment provided by the elasticsearch-operator , remove the owner references from the Elasticsearch resource named elasticsearch , and the Kibana resource named kibana in the openshift-logging namespace before removing the ClusterLogging resource named instance in the same namespace. Temporarily set ClusterLogging to state Unmanaged USD oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge Remove ClusterLogging ownerReferences from the Elasticsearch resource The following command ensures that ClusterLogging no longer owns the Elasticsearch resource. Updates to the ClusterLogging resource's logStore field will no longer affect the Elasticsearch resource. USD oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge Remove ClusterLogging ownerReferences from the Kibana resource The following command ensures that ClusterLogging no longer owns the Kibana resource. 
Updates to the ClusterLogging resource's visualization field will no longer affect the Kibana resource. USD oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge Set ClusterLogging to state Managed USD oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Managed"}}' --type=merge 2.3.3. Log Visualization The OpenShift console UI plugin for log visualization has been moved to the cluster-observability-operator from the cluster-logging-operator . 2.3.4. Log Collection and Forwarding Log collection and forwarding configurations are now specified under the new API , part of the observability.openshift.io API group. The following sections highlight the differences from the old API resources. Note Vector is the only supported collector implementation. 2.3.5. Management, Resource Allocation, and Workload Scheduling Configuration for management state (e.g., Managed, Unmanaged), resource requests and limits, tolerations, and node selection is now part of the new ClusterLogForwarder API. Configuration apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" spec: managementState: "Managed" collection: resources: limits: {} requests: {} nodeSelector: {} tolerations: {} Current Configuration apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder spec: managementState: Managed collector: resources: limits: {} requests: {} nodeSelector: {} tolerations: {} 2.3.6. Input Specifications The input specification is an optional part of the ClusterLogForwarder specification. Administrators can continue to use the predefined values of application , infrastructure , and audit to collect these sources. 2.3.6.1. Application Inputs Namespace and container inclusions and exclusions have been consolidated into a single field. 5.9 Application Input with Namespace and Container Includes and Excludes apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: namespaces: - foo - bar includes: - namespace: my-important container: main excludes: - container: too-verbose 6.0 Application Input with Namespace and Container Includes and Excludes apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: includes: - namespace: foo - namespace: bar - namespace: my-important container: main excludes: - container: too-verbose Note application , infrastructure , and audit are reserved words and cannot be used as names when defining an input. 2.3.6.2. Input Receivers Changes to input receivers include: Explicit configuration of the type at the receiver level. Port settings moved to the receiver level. 5.9 Input Receivers apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: an-http receiver: http: port: 8443 format: kubeAPIAudit - name: a-syslog receiver: type: syslog syslog: port: 9442 6.0 Input Receivers apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: an-http type: receiver receiver: type: http port: 8443 http: format: kubeAPIAudit - name: a-syslog type: receiver receiver: type: syslog port: 9442 2.3.7. Output Specifications High-level changes to output specifications include: URL settings moved to each output type specification. Tuning parameters moved to each output type specification. Separation of TLS configuration from authentication. 
Explicit configuration of keys and secret/configmap for TLS and authentication. 2.3.8. Secrets and TLS Configuration Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification rather than relying on administrators to define secrets with recognized keys. Upgrading TLS and authorization configurations requires administrators to understand previously recognized keys to continue using existing secrets. Examples in the following sections provide details on how to configure ClusterLogForwarder secrets to forward to existing Red Hat managed log storage solutions. 2.3.9. Red Hat Managed Elasticsearch v5.9 Forwarding to Red Hat Managed Elasticsearch apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: elasticsearch v6.0 Forwarding to Red Hat Managed Elasticsearch apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: serviceAccount: name: <service_account_name> managementState: Managed outputs: - name: audit-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: audit-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: app-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: app-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: infra-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: infra-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector pipelines: - name: app inputRefs: - application outputRefs: - app-elasticsearch - name: audit inputRefs: - audit outputRefs: - audit-elasticsearch - name: infra inputRefs: - infrastructure outputRefs: - infra-elasticsearch 2.3.10. Red Hat Managed LokiStack v5.9 Forwarding to Red Hat Managed LokiStack apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: lokistack lokistack: name: lokistack-dev v6.0 Forwarding to Red Hat Managed LokiStack apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: lokistack-dev namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - outputRefs: - default-lokistack - inputRefs: - application - infrastructure 2.3.11. Filters and Pipeline Configuration Pipeline configurations now define only the routing of input sources to their output destinations, with any required transformations configured separately as filters. All attributes of pipelines from releases have been converted to filters in this release. Individual filters are defined in the filters specification and referenced by a pipeline. 
5.9 Filters apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder spec: pipelines: - name: application-logs parse: json labels: foo: bar detectMultilineErrors: true 6.0 Filter Configuration apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: filters: - name: detectexception type: detectMultilineException - name: parse-json type: parse - name: labels type: openshiftLabels openshiftLabels: foo: bar pipelines: - name: application-logs filterRefs: - detectexception - labels - parse-json 2.3.12. Validation and Status Most validations are enforced when a resource is created or updated, providing immediate feedback. This is a departure from releases, where validation occurred post-creation and required inspecting the resource status. Some validation still occurs post-creation for cases where it is not possible to validate at creation or update time. Instances of the ClusterLogForwarder.observability.openshift.io must satisfy the following conditions before the operator will deploy the log collector: Authorized, Valid, Ready. An example of these conditions is: 6.0 Status Conditions apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder status: conditions: - lastTransitionTime: "2024-09-13T03:28:44Z" message: 'permitted to collect log types: [application]' reason: ClusterRolesExist status: "True" type: observability.openshift.io/Authorized - lastTransitionTime: "2024-09-13T12:16:45Z" message: "" reason: ValidationSuccess status: "True" type: observability.openshift.io/Valid - lastTransitionTime: "2024-09-13T12:16:45Z" message: "" reason: ReconciliationComplete status: "True" type: Ready filterConditions: - lastTransitionTime: "2024-09-13T13:02:59Z" message: filter "detectexception" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidFilter-detectexception - lastTransitionTime: "2024-09-13T13:02:59Z" message: filter "parse-json" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidFilter-parse-json inputConditions: - lastTransitionTime: "2024-09-13T12:23:03Z" message: input "application1" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidInput-application1 outputConditions: - lastTransitionTime: "2024-09-13T13:02:59Z" message: output "default-lokistack-application1" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidOutput-default-lokistack-application1 pipelineConditions: - lastTransitionTime: "2024-09-13T03:28:44Z" message: pipeline "default-before" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidPipeline-default-before Note Conditions that are satisfied and applicable have a "status" value of "True". Conditions with a status other than "True" provide a reason and a message explaining the issue. 2.4. Configuring log forwarding The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. Key Functions of the ClusterLogForwarder Selects log messages using inputs Forwards logs to external destinations using outputs Filters, transforms, and drops log messages using filters Defines log forwarding pipelines connecting inputs, filters and outputs 2.4.1. 
Setting up log collection This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder . This was not required in releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. The Red Hat OpenShift Logging Operator provides collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. Setup log collection by binding the required cluster roles to your service account. 2.4.1.1. Legacy service accounts To use the existing legacy service account logcollector , create the following ClusterRoleBinding : USD oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector Additionally, create the following ClusterRoleBinding if collecting audit logs: USD oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector 2.4.1.2. Creating service accounts Prerequisites The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. You have administrator permissions. Procedure Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. Bind the appropriate cluster roles to the service account: Example binding command USD oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name> 2.4.1.2.1. Cluster Role Binding for your Service Account The role_binding.yaml file binds the ClusterLogging operator's ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8 1 roleRef: References the ClusterRole to which the binding applies. 2 apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. 3 kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. 4 name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. 5 subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. 6 kind: Specifies that the subject is a ServiceAccount. 7 Name: The name of the ServiceAccount being granted the permissions. 8 namespace: Indicates the namespace where the ServiceAccount is located. 2.4.1.2.2. Writing application logs The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 Annotations <1> rules: Specifies the permissions granted by this ClusterRole. 
<2> apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. <3> loki.grafana.com: The API group for managing Loki-related resources. <4> resources: The resource type that the ClusterRole grants permission to interact with. <5> application: Refers to the application resources within the Loki logging system. <6> resourceNames: Specifies the names of resources that this role can manage. <7> logs: Refers to the log resources that can be created. <8> verbs: The actions allowed on the resources. <9> create: Grants permission to create new logs in the Loki system. 2.4.1.2.3. Writing audit logs The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Defines the permissions granted by this ClusterRole. 2 apiGroups: Specifies the API group loki.grafana.com. 3 loki.grafana.com: The API group responsible for Loki logging resources. 4 resources: Refers to the resource type this role manages, in this case, audit. 5 audit: Specifies that the role manages audit logs within Loki. 6 resourceNames: Defines the specific resources that the role can access. 7 logs: Refers to the logs that can be managed under this role. 8 verbs: The actions allowed on the resources. 9 create: Grants permission to create new audit logs. 2.4.1.2.4. Writing infrastructure logs The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. Sample YAML apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Specifies the permissions this ClusterRole grants. 2 apiGroups: Specifies the API group for Loki-related resources. 3 loki.grafana.com: The API group managing the Loki logging system. 4 resources: Defines the resource type that this role can interact with. 5 infrastructure: Refers to infrastructure-related resources that this role manages. 6 resourceNames: Specifies the names of resources this role can manage. 7 logs: Refers to the log resources related to infrastructure. 8 verbs: The actions permitted by this role. 9 create: Grants permission to create infrastructure logs in the Loki system. 2.4.1.2.5. ClusterLogForwarder editor role The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13 1 rules: Specifies the permissions this ClusterRole grants. 2 apiGroups: Refers to the OpenShift-specific API group. 3 observability.openshift.io: The API group for managing observability resources, like logging. 4 resources: Specifies the resources this role can manage. 5 clusterlogforwarders: Refers to the log forwarding resources in OpenShift. 6 verbs: Specifies the actions allowed on the ClusterLogForwarders.
7 create: Grants permission to create new ClusterLogForwarders. 8 delete: Grants permission to delete existing ClusterLogForwarders. 9 get: Grants permission to retrieve information about specific ClusterLogForwarders. 10 list: Allows listing all ClusterLogForwarders. 11 patch: Grants permission to partially modify ClusterLogForwarders. 12 update: Grants permission to update existing ClusterLogForwarders. 13 watch: Grants permission to monitor changes to ClusterLogForwarders. 2.4.2. Modifying log level in collector To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace , debug , info , warn , error , and off . Example log level annotation apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug # ... 2.4.3. Managing the Operator The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: Managed (default) The operator will drive the logging resources to match the desired state in the CLF spec. Unmanaged The operator will not take any action related to the logging components. This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged . 2.4.4. Structure of the ClusterLogForwarder The CLF has a spec section that contains the following key components: Inputs Select log messages to be forwarded. Built-in input types application , infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. Outputs Define destinations to forward logs to. Each output has a unique name and type-specific configuration. Pipelines Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. Filters Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. 2.4.4.1. Inputs Inputs are configured in an array under spec.inputs . There are three built-in input types: application Selects logs from all application containers, excluding those in infrastructure namespaces. infrastructure Selects logs from nodes and from infrastructure components running in the following namespaces: default kube openshift Containing the kube- or openshift- prefix audit Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. 2.4.4.2. Outputs Outputs are configured in an array under spec.outputs . Each output must have a unique name and a type. Supported types are: azureMonitor Forwards logs to Azure Monitor. cloudwatch Forwards logs to AWS CloudWatch. elasticsearch Forwards logs to an external Elasticsearch instance. googleCloudLogging Forwards logs to Google Cloud Logging. http Forwards logs to a generic HTTP endpoint. kafka Forwards logs to a Kafka broker. loki Forwards logs to a Loki logging backend. lokistack Forwards logs to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy otlp Forwards logs using the OpenTelemetry Protocol. 
splunk Forwards logs to Splunk. syslog Forwards logs to an external syslog server. Each output type has its own configuration fields. 2.4.4.3. Pipelines Pipelines are configured in an array under spec.pipelines . Each pipeline must have a unique name and consists of: inputRefs Names of inputs whose logs should be forwarded to this pipeline. outputRefs Names of outputs to send logs to. filterRefs (optional) Names of filters to apply. The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. 2.4.4.4. Filters Filters are configured in an array under spec.filters . They can match incoming log messages based on the value of structured fields and modify or drop them. Administrators can configure the following types of filters: 2.4.4.5. Enabling multi-line exception detection Enables multi-line error detection of container logs. Warning Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. Example java exception java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10) To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters . Example ClusterLogForwarder CR apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name> 2.4.4.5.1. Details When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message's content is replaced with the concatenated content of all the message fields in the sequence. The collector supports the following languages: Java JS Ruby Python Golang PHP Dart 2.4.4.6. Configuring content filters to drop unwanted log records When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. Procedure Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels."foo-bar/baz" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: "my-pod" 6 pipelines: - name: <pipeline_name> 7 filterRefs: ["<filter_name>"] # ... 1 Specifies the type of filter. The drop filter drops log records that match the filter configuration. 2 Specifies configuration options for applying the drop filter. 
3 Specifies the configuration for tests that are used to evaluate whether a log record is dropped. If all the conditions specified for a test are true, the test passes and the log record is dropped. When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. 4 Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. 5 Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 6 Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 7 Specifies the pipeline that the drop filter is applied to. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml Additional examples The following additional example shows how you can configure the drop filter to only keep higher priority log records: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: "(?i)critical|error" - field: .level matches: "info|warning" # ... In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: "^open" - test: - field: .log_type matches: "application" - field: .kubernetes.pod_name notMatches: "my-pod" # ... 2.4.4.7. Overview of API audit filter OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: None : The event is dropped. Metadata : Audit metadata is included, request and response bodies are removed. Request : Audit metadata and the request body are included, the response body is removed. RequestResponse : All data is included: metadata, request body and response body. The response body can be very large. 
For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy , while providing the following additional functions: Wildcards Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication . Resource \*/status matches Pod/status or Deployment/status . Default Rules Events that do not match any rule in the policy are filtered as follows: Read-only system events such as get , list , and watch are dropped. Service account write events that occur within the same namespace as the service account are dropped. All other events are forwarded, subject to any configured rate limits. To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. Omit Response Codes A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429] . If the value is an empty list, [] , then no status codes are omitted. The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. Note You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. Example audit policy apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - "RequestReceived" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: "" resources: ["pods"] # Log "pods/log", "pods/status" at Metadata level - level: Metadata resources: - group: "" resources: ["pods/log", "pods/status"] # Don't log requests to a configmap called "controller-leader" - level: None resources: - group: "" resources: ["configmaps"] resourceNames: ["controller-leader"] # Don't log watch requests by the "system:kube-proxy" on endpoints or services - level: None users: ["system:kube-proxy"] verbs: ["watch"] resources: - group: "" # core API group resources: ["endpoints", "services"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: ["system:authenticated"] nonResourceURLs: - "/api*" # Wildcard matching. - "/version" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: "" # core API group resources: ["configmaps"] # This rule only applies to resources in the "kube-system" namespace. 
# The empty string "" can be used to select non-namespaced resources. namespaces: ["kube-system"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: "" # core API group resources: ["secrets", "configmaps"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: "" # core API group - group: "extensions" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata 1 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. 2 The name of your audit policy. 2.4.4.8. Filtering application logs at input by including the label expressions or a matching label key and values You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. Procedure Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: ["prod", "qa"] 3 - key: zone operator: NotIn values: ["east", "west"] matchLabels: 4 app: one name: app1 type: application # ... 1 Specifies the label key to match. 2 Specifies the operator. Valid values include: In , NotIn , Exists , and DoesNotExist . 3 Specifies an array of string values. If the operator value is either Exists or DoesNotExist , the value array must be empty. 4 Specifies an exact key or value mapping. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.4.4.9. Configuring content filters to prune log records When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. Procedure Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: Important If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: ["<filter_name>"] # ... 1 Specify the type of filter. The prune filter prunes log records by configured fields. 2 Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. 
These paths can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . 3 Optional: Any fields that are specified in this array are removed from the log record. 4 Optional: Any fields that are not specified in this array are removed from the log record. 5 Specify the pipeline that the prune filter is applied to. Note The filters exempts the log_type , .log_source , and .message fields. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.4.5. Filtering the audit and infrastructure log inputs by source You can define the list of audit and infrastructure sources to collect the logs by using the input selector. Procedure Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn # ... 1 Specifies the list of infrastructure sources to collect. The valid sources include: node : Journal log from the node container : Logs from the workloads deployed in the namespaces 2 Specifies the list of audit sources to collect. The valid sources include: kubeAPI : Logs from the Kubernetes API servers openshiftAPI : Logs from the OpenShift API servers auditd : Logs from a node auditd service ovn : Logs from an open virtual network service Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.4.6. Filtering application logs at input by including or excluding the namespace or container name You can include or exclude the application logs based on the namespace and container name by using the input selector. Procedure Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: "my-project" 1 container: "my-container" 2 excludes: - container: "other-container*" 3 namespace: "other-namespace" 4 type: application # ... 1 Specifies that the logs are only collected from these namespaces. 2 Specifies that the logs are only collected from these containers. 3 Specifies the pattern of namespaces to ignore when collecting the logs. 4 Specifies the set of containers to ignore when collecting the logs. Note The excludes field takes precedence over the includes field. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.5. Storing logs with LokiStack You can configure a LokiStack CR to store application, audit, and infrastructure-related logs. 2.5.1. Prerequisites You have installed the Loki Operator by using the CLI or web console. You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder . 
The serviceAccount is assigned collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles. 2.5.1.1. Core Setup and Configuration Role-based access controls, basic monitoring, and pod placement to deploy Loki. 2.5.2. Authorizing LokiStack rules RBAC permissions Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. The following cluster roles for alerting and recording rules are available for LokiStack: Rule name Description alertingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources within the loki.grafana.com/v1 API group. alertingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. alertingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete AlertingRule resources. alertingrules.loki.grafana.com-v1-view Users with this role can read AlertingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. recordingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources within the loki.grafana.com/v1 API group. recordingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. recordingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete RecordingRule resources. recordingrules.loki.grafana.com-v1-view Users with this role can read RecordingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. 2.5.2.1. Examples To apply cluster roles for a user, you must bind an existing cluster role to a specific username. Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. 
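If you manage RBAC declaratively rather than with the oc adm policy commands shown below, the namespace-scoped case corresponds to a RoleBinding manifest along the lines of the following sketch. The binding name is illustrative, and <namespace> and <username> are placeholders for your own values; the cluster-wide case would instead use a ClusterRoleBinding with the same roleRef and no namespace.
Example RoleBinding manifest (sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alertingrules-admin      # illustrative name; choose any binding name
  namespace: <namespace>         # placeholder: the namespace the alerting rules apply to
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alertingrules.loki.grafana.com-v1-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <username>               # placeholder: the user receiving the permissions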
The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: Example cluster role binding command for alerting rule CRUD permissions in a specific namespace USD oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username> The following command gives the specified user administrator permissions for alerting rules in all namespaces: Example cluster role binding command for administrator permissions USD oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username> 2.5.3. Creating a log-based alerting rule with Loki The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid LogQL expr , it is an invalid alerting rule. If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. If none of the above applies, an alerting rule is considered valid. Table 2.2. AlertingRule definitions Tenant type Valid namespaces for AlertingRule CRs application <your_application_namespace> audit openshift-logging infrastructure openshift-/* , kube-/\* , default Procedure Create an AlertingRule custom resource (CR): Example infrastructure AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "infrastructure" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) / sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 AlertingRule CRs for infrastructure tenants are only supported in the openshift-* , kube-\* , or default namespaces. 4 The value for kubernetes_namespace_name: must match the value for metadata.namespace . 5 The value of this mandatory field must be critical , warning , or info . 6 This field is mandatory. 7 This field is mandatory. 
Example application AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "application" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 Value for kubernetes_namespace_name: must match the value for metadata.namespace . 4 The value of this mandatory field must be critical , warning , or info . 5 The value of this mandatory field is a summary of the rule. 6 The value of this mandatory field is a detailed description of the rule. Apply the AlertingRule CR: USD oc apply -f <filename>.yaml 2.5.4. Configuring Loki to tolerate memberlist creation failure In an OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: USD oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' Example LokiStack to include podIP apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... hashRing: type: memberlist memberlist: instanceAddrType: podIP # ... 2.5.5. Enabling stream-based retention with Loki You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Note Schema v13 is recommended. Procedure Create a LokiStack CR: Enable stream-based retention globally as shown in the following example: Example global stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~"test.+"}' 3 - days: 1 priority: 1 selector: '{log_type="infrastructure"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. 2 Retention is enabled in the cluster when this block is added to the CR. 
3 Contains the LogQL query used to define the log stream.spec: limits: Enable stream-based retention per-tenant basis as shown in the following example: Example per-tenant stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~"test.+"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy by tenant. Valid tenant types are application , audit , and infrastructure . 2 Contains the LogQL query used to define the log stream. Apply the LokiStack CR: USD oc apply -f <filename>.yaml Note This is not for managing the retention for stored logs. Global retention periods for stored logs to a supported maximum of 30 days is configured with your object storage. 2.5.6. Loki pod placement You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. Example LokiStack with node selectors apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: "" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: "" gateway: nodeSelector: node-role.kubernetes.io/infra: "" indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" ingester: nodeSelector: node-role.kubernetes.io/infra: "" querier: nodeSelector: node-role.kubernetes.io/infra: "" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" ruler: nodeSelector: node-role.kubernetes.io/infra: "" # ... 1 Specifies the component pod type that applies to the node selector. 2 Specifies the pods that are moved to nodes containing the defined label. Example LokiStack CR with node selectors and tolerations apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... 
template: compactor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved # ... To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: USD oc explain lokistack.spec.template Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec. ... For more detailed information, you can add a specific field: USD oc explain lokistack.spec.template.compactor Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it. ... 2.5.6.1. Enhanced Reliability and Performance Configurations to ensure Loki's reliability and efficiency in production. 2.5.7. Enabling authentication to cloud-based log stores using short-lived tokens Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. Procedure Use one of the following options to enable authentication: If you use the OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. If you use the OpenShift CLI ( oc ) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. 
This authentication strategy is only supported for the storage providers indicated. Example Azure sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-6.0" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region> Example AWS sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-6.0" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN> 2.5.8. Configuring Loki to tolerate node failure The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor , distributor , gateway , indexGateway , ingester , querier , queryFrontend , and ruler components. You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: Example user settings for the ingester component apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: ingester: podAntiAffinity: # ... requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname # ... 1 The stanza to define a required rule. 2 The key-value pair (label) that must be matched to apply the rule. 2.5.9. LokiStack behavior during cluster restarts When an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. 2.5.9.1. Advanced Deployment and Scalability Specialized configurations for high availability, scalability, and error handling. 2.5.10. Zone aware data replication The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small , 1x.small , or 1x.medium , the replication.factor field is automatically set to 2. 
To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. Example LokiStack CR with zone replication enabled apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4 1 Deprecated field, values entered are overwritten by replication.factor . 2 This value is automatically set when deployment size is selected at setup. 3 The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. 4 Defines zones in the form of a topology key that corresponds to a node label. 2.5.11. Recovering Loki pods from failed zones In OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. Loki pods are part of a StatefulSet , and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. Warning The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. Prerequisites Verify your LokiStack CR has a replication factor greater than 1. Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. Procedure List the pods in Pending status by running the following command: USD oc get pods --field-selector status.phase==Pending -n openshift-logging Example oc get pods output NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m 1 These pods are in Pending status because their corresponding PVCs are in the failed zone. 
List the PVCs in Pending status by running the following command: USD oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r Example oc get pvc output storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1 Delete the PVC(s) for a pod by running the following command: USD oc delete pvc <pvc_name> -n openshift-logging Delete the pod(s) by running the following command: USD oc delete pod <pod_name> -n openshift-logging Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. 2.5.11.1. Troubleshooting PVC in a terminating state The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection . Removing the finalizers should allow the PVCs to delete successfully. Remove the finalizer for each PVC by running the command below, then retry deletion. USD oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging 2.5.12. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true Example Fluentd error message 2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. 
retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" The error is also visible on the receiving end. For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 2.6. Visualization for logging Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator , which requires Operator installation. Important Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA. | [
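After the Cluster Observability Operator is installed, the visualization plugin is enabled by creating a UIPlugin resource. The following minimal sketch mirrors the UIPlugin example included in the command listing below and assumes a LokiStack named logging-loki in the openshift-logging namespace; adjust the name to match your deployment.

Example UIPlugin sketch

apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki

Apply the resource by running the following command:

USD oc apply -f <filename>.yaml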
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <name> spec: outputs: - name: <output_name> type: <output_type> <output_type>: tuning: deliveryMode: AtMostOnce",
"oc create secret generic logging-loki-s3 --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\" -n openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2022-06-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack",
"oc explain clusterlogforwarders.observability.openshift.io.spec.outputs",
"oc explain lokistacks.loki.grafana.com oc explain lokistacks.loki.grafana.com.spec oc explain lokistacks.loki.grafana.com.spec.storage oc explain lokistacks.loki.grafana.com.spec.storage.schemas",
"oc explain lokistacks.loki.grafana.com.spec.size",
"oc explain lokistacks.spec.template.distributor.replicas",
"GROUP: loki.grafana.com KIND: LokiStack VERSION: v1 FIELD: replicas <integer> DESCRIPTION: Replicas defines the number of replica pods of the component.",
"oc -n openshift-logging patch clusterlogging/instance -p '{\"spec\":{\"managementState\": \"Unmanaged\"}}' --type=merge",
"oc -n openshift-logging patch elasticsearch/elasticsearch -p '{\"metadata\":{\"ownerReferences\": []}}' --type=merge",
"oc -n openshift-logging patch kibana/kibana -p '{\"metadata\":{\"ownerReferences\": []}}' --type=merge",
"oc -n openshift-logging patch clusterlogging/instance -p '{\"spec\":{\"managementState\": \"Managed\"}}' --type=merge",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" collection: resources: limits: {} requests: {} nodeSelector: {} tolerations: {}",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: managementState: Managed collector: resources: limits: {} requests: {} nodeSelector: {} tolerations: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: namespaces: - foo - bar includes: - namespace: my-important container: main excludes: - container: too-verbose",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: includes: - namespace: foo - namespace: bar - namespace: my-important container: main excludes: - container: too-verbose",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: an-http receiver: http: port: 8443 format: kubeAPIAudit - name: a-syslog receiver: type: syslog syslog: port: 9442",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: an-http type: receiver receiver: type: http port: 8443 http: format: kubeAPIAudit - name: a-syslog type: receiver receiver: type: syslog port: 9442",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: elasticsearch",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: serviceAccount: name: <service_account_name> managementState: Managed outputs: - name: audit-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: audit-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: app-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: app-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: infra-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: infra-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector pipelines: - name: app inputRefs: - application outputRefs: - app-elasticsearch - name: audit inputRefs: - audit outputRefs: - audit-elasticsearch - name: infra inputRefs: - infrastructure outputRefs: - infra-elasticsearch",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: lokistack lokistack: name: lokistack-dev",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: lokistack-dev namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - outputRefs: - default-lokistack - inputRefs: - application - infrastructure",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder spec: pipelines: - name: application-logs parse: json labels: foo: bar detectMultilineErrors: true",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: filters: - name: detectexception type: detectMultilineException - name: parse-json type: parse - name: labels type: openshiftLabels openshiftLabels: foo: bar pipelines: - name: application-logs filterRefs: - detectexception - labels - parse-json",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder status: conditions: - lastTransitionTime: \"2024-09-13T03:28:44Z\" message: 'permitted to collect log types: [application]' reason: ClusterRolesExist status: \"True\" type: observability.openshift.io/Authorized - lastTransitionTime: \"2024-09-13T12:16:45Z\" message: \"\" reason: ValidationSuccess status: \"True\" type: observability.openshift.io/Valid - lastTransitionTime: \"2024-09-13T12:16:45Z\" message: \"\" reason: ReconciliationComplete status: \"True\" type: Ready filterConditions: - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: filter \"detectexception\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidFilter-detectexception - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: filter \"parse-json\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidFilter-parse-json inputConditions: - lastTransitionTime: \"2024-09-13T12:23:03Z\" message: input \"application1\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidInput-application1 outputConditions: - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: output \"default-lokistack-application1\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidOutput-default-lokistack-application1 pipelineConditions: - lastTransitionTime: \"2024-09-13T03:28:44Z\" message: pipeline \"default-before\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidPipeline-default-before",
"oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 Annotations <1> rules: Specifies the permissions granted by this ClusterRole. <2> apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. <3> loki.grafana.com: The API group for managing Loki-related resources. <4> resources: The resource type that the ClusterRole grants permission to interact with. <5> application: Refers to the application resources within the Loki logging system. <6> resourceNames: Specifies the names of resources that this role can manage. <7> logs: Refers to the log resources that can be created. <8> verbs: The actions allowed on the resources. <9> create: Grants permission to create new logs in the Loki system.",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application",
"oc apply -f <filename>.yaml",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"oc get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"oc delete pvc <pvc_name> -n openshift-logging",
"oc delete pod <pod_name> -n openshift-logging",
"oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/logging/logging-6-0 |
Chapter 23. Manually Recovering File Split-brain | Chapter 23. Manually Recovering File Split-brain This chapter provides steps to manually recover from split-brain. Run the following command to obtain the path of the file that is in split-brain: From the command output, identify the files for which file operations performed from the client keep failing with Input/Output error. Close the applications that opened the split-brain file from the mount point. If you are using a virtual machine, you must power off the machine. Obtain and verify the AFR changelog extended attributes of the file using the getfattr command. Then identify the type of split-brain to determine which of the bricks contains the 'good copy' of the file. For example, The extended attributes with trusted.afr. VOLNAME -client-<subvolume-index> are used by AFR to maintain the changelog of the file. The values of trusted.afr. VOLNAME -client-<subvolume-index> are calculated by the glusterFS client (FUSE or NFS-server) processes. When the glusterFS client modifies a file or directory, the client contacts each brick and updates the changelog extended attribute according to the response of the brick. subvolume-index is the brick number - 1 in the gluster volume info VOLNAME output. For example, In the example above: Each file in a brick maintains the changelog of itself and that of the files present in all the other bricks in its replica set as seen by that brick. In the example volume given above, all files in brick1 will have 2 entries, one for itself and the other for the file present in its replica pair. The following is the changelog for brick1: trusted.afr.vol-client-0=0x000000000000000000000000 - changelog for itself (brick1) trusted.afr.vol-client-1=0x000000000000000000000000 - changelog for brick2 as seen by brick1 Likewise, all files in brick2 will have the following: trusted.afr.vol-client-0=0x000000000000000000000000 - changelog for brick1 as seen by brick2 trusted.afr.vol-client-1=0x000000000000000000000000 - changelog for itself (brick2) Note These files do not have entries for themselves, only for the other bricks in the replica. For example, brick1 will only have trusted.afr.vol-client-1 set and brick2 will only have trusted.afr.vol-client-0 set. Interpreting the changelog remains the same as explained below. The same can be extended for other replica pairs. Interpreting changelog (approximate pending operation count) value Each extended attribute has a value which is 24 hexadecimal digits. The first 8 digits represent the changelog of data. The second 8 digits represent the changelog of metadata. The last 8 digits represent the changelog of directory entries. Pictorially representing the same is as follows: For directories, metadata and entry changelogs are valid. For regular files, data and metadata changelogs are valid. For special files like device files and so on, the metadata changelog is valid. When a file split-brain happens, it could be either a data split-brain, a metadata split-brain, or both. The following is an example of both data and metadata split-brain on the same file: Scrutinize the changelogs The changelog extended attributes on file /rhgs/brick1/a are as follows: The first 8 digits of trusted.afr.vol-client-0 are all zeros (0x00000000................), and the first 8 digits of trusted.afr.vol-client-1 are not all zeros (0x000003d7................). So the changelog on /rhgs/brick1/a implies that some data operations succeeded on itself but failed on /rhgs/brick2/a .
The second 8 digits of trusted.afr.vol-client-0 are all zeros (0x........00000000........), and the second 8 digits of trusted.afr.vol-client-1 are not all zeros (0x........00000001........). So the changelog on /rhgs/brick1/a implies that some metadata operations succeeded on itself but failed on /rhgs/brick2/a . The changelog extended attributes on file /rhgs/brick2/a are as follows: The first 8 digits of trusted.afr.vol-client-0 are not all zeros (0x000003b0................). The first 8 digits of trusted.afr.vol-client-1 are all zeros (0x00000000................). So the changelog on /rhgs/brick2/a implies that some data operations succeeded on itself but failed on /rhgs/brick1/a . The second 8 digits of trusted.afr.vol-client-0 are not all zeros (0x........00000001........), and the second 8 digits of trusted.afr.vol-client-1 are all zeros (0x........00000000........). So the changelog on /rhgs/brick2/a implies that some metadata operations succeeded on itself but failed on /rhgs/brick1/a . Here, both copies have data and metadata changes that are not on the other file. Hence, it is both a data and metadata split-brain. Deciding on the correct copy You must inspect the stat and getfattr output of the files to decide which metadata to retain, and the contents of the file to decide which data to retain. To continue with the example above, we retain the data of /rhgs/brick1/a and the metadata of /rhgs/brick2/a . Resetting the relevant changelogs to resolve the split-brain Resolving data split-brain You must change the changelog extended attributes on the files as if some data operations succeeded on /rhgs/brick1/a but failed on /rhgs/brick2/a . But /rhgs/brick2/a should not have any changelog showing data operations succeeded on /rhgs/brick2/a but failed on /rhgs/brick1/a . You must reset the data part of the changelog on trusted.afr.vol-client-0 of /rhgs/brick2/a . Resolving metadata split-brain You must change the changelog extended attributes on the files as if some metadata operations succeeded on /rhgs/brick2/a but failed on /rhgs/brick1/a . But /rhgs/brick1/a should not have any changelog which says some metadata operations succeeded on /rhgs/brick1/a but failed on /rhgs/brick2/a . You must reset the metadata part of the changelog on trusted.afr.vol-client-1 of /rhgs/brick1/a . Run the following commands to reset the extended attributes. On /rhgs/brick2/a , for trusted.afr.vol-client-0 , from 0x000003b00000000100000000 to 0x000000000000000100000000 , execute the following command: On /rhgs/brick1/a , for trusted.afr.vol-client-1 , from 0x000003d70000000100000000 to 0x000003d70000000000000000 , execute the following command: After you reset the extended attributes, the changelogs would look similar to the following: Resolving Directory entry split-brain AFR has the ability to conservatively merge different entries in the directories when there is a split-brain on a directory. If the directory on one brick has entries 1 , 2 and the directory on the other brick has entries 3 , 4 , then AFR will merge all of the entries so that the directory contains 1, 2, 3, 4 on both bricks. But this may result in deleted files re-appearing if the split-brain happened because of deletion of files from the directory. Split-brain resolution needs human intervention when there is at least one entry which has the same file name but a different gfid in that directory. For example: On brick-a the directory has 2 entries, file1 with gfid_x and file2 . On brick-b the directory has 2 entries, file1 with gfid_y and file3 .
Here, the gfids of file1 on the two bricks are different. This kind of directory split-brain needs human intervention to resolve. You must remove either file1 on brick-a or file1 on brick-b to resolve the split-brain. In addition, the corresponding gfid-link file must be removed. The gfid-link files are present in the .glusterfs directory in the top-level directory of the brick. If the gfid of the file is 0x307a5c9efddd4e7c96e94fd4bcdcbd1b (the trusted.gfid extended attribute received from the getfattr command earlier), the gfid-link file can be found at /rhgs/brick1/.glusterfs/30/7a/307a5c9efddd4e7c96e94fd4bcdcbd1b. Warning Before deleting the gfid-link, you must ensure that there are no hard links to the file present on that brick. If hard links exist, you must delete them. Trigger self-heal by running the following command: or
"gluster volume heal VOLNAME info split-brain",
"getfattr -d -m . -e hex <file-path-on-brick>",
"getfattr -d -e hex -m. brick-a/file.txt #file: brick-a/file.txt security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000 trusted.afr.vol-client-2=0x000000000000000000000000 trusted.afr.vol-client-3=0x000000000200000000000000 trusted.gfid=0x307a5c9efddd4e7c96e94fd4bcdcbd1b",
"gluster volume info vol Volume Name: vol Type: Distributed-Replicate Volume ID: 4f2d7849-fbd6-40a2-b346-d13420978a01 Status: Created Number of Bricks: 4 x 2 = 8 Transport-type: tcp Bricks: brick1: server1:/rhgs/brick1 brick2: server1:/rhgs/brick2 brick3: server1:/rhgs/brick3 brick4: server1:/rhgs/brick4 brick5: server1:/rhgs/brick5 brick6: server1:/rhgs/brick6 brick7: server1:/rhgs/brick7 brick8: server1:/rhgs/brick8",
"Brick | Replica set | Brick subvolume index ---------------------------------------------------------------------------- /rhgs/brick1 | 0 | 0 /rhgs/brick2 | 0 | 1 /rhgs/brick3 | 1 | 2 /rhgs/brick4 | 1 | 3 /rhgs/brick5 | 2 | 4 /rhgs/brick6 | 2 | 5 /rhgs/brick7 | 3 | 6 /rhgs/brick8 | 3 | 7 ```",
"0x 000003d7 00000001 00000000110 | | | | | \\_ changelog of directory entries | \\_ changelog of metadata \\ _ changelog of data",
"getfattr -d -m . -e hex /rhgs/brick?/a getfattr: Removing leading '/' from absolute path names #file: rhgs/brick1/a trusted.afr.vol-client-0=0x000000000000000000000000 trusted.afr.vol-client-1=0x000003d70000000100000000 trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57 #file: rhgs/brick2/a trusted.afr.vol-client-0=0x000003b00000000100000000 trusted.afr.vol-client-1=0x000000000000000000000000 trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57",
"setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /rhgs/brick2/a",
"setfattr -n trusted.afr.vol-client-1 -v 0x000003d70000000000000000 /rhgs/brick1/a",
"getfattr -d -m . -e hex /rhgs/brick?/a getfattr: Removing leading '/' from absolute path names #file: rhgs/brick1/a trusted.afr.vol-client-0=0x000000000000000000000000 trusted.afr.vol-client-1=0x000003d70000000000000000 trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57 #file: rhgs/brick2/a trusted.afr.vol-client-0=0x000000000000000100000000 trusted.afr.vol-client-1=0x000000000000000000000000 trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57",
"ls -l <file-path-on-gluster-mount>",
"gluster volume heal VOLNAME"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-manually_resolving_split-brains |
Chapter 3. glance | Chapter 3. glance The following chapter contains information about the configuration options in the glance service. 3.1. glance-api.conf This section contains options for the /etc/glance/glance-api.conf file. 3.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-api.conf file. . Configuration option = Default value Type Description admin_password = None string value The administrators password. If "use_user_token" is not in effect, then admin credentials can be specified. admin_role = admin string value Role used to identify an authenticated user as administrator. Provide a string value representing a Keystone role to identify an administrative user. Users with this role will be granted administrative privileges. The default value for this option is admin . Possible values: A string value which is a valid Keystone role Related options: None admin_tenant_name = None string value The tenant name of the administrative user. If "use_user_token" is not in effect, then admin tenant name can be specified. admin_user = None string value The administrators user name. If "use_user_token" is not in effect, then admin credentials can be specified. allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties . By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via image_property_quota configuration option. Possible values: True False Related options: image_property_quota allow_anonymous_access = False boolean value Allow limited access to unauthenticated users. Assign a boolean to determine API access for unauthenticated users. When set to False, the API cannot be accessed by unauthenticated users. When set to True, unauthenticated users can access the API with read-only privileges. This however only applies when using ContextMiddleware. Possible values: True False Related options: None allowed_rpc_exception_modules = ['glance.common.exception', 'builtins', 'exceptions'] list value List of allowed exception modules to handle RPC exceptions. Provide a comma separated list of modules whose exceptions are permitted to be recreated upon receiving exception data via an RPC call made to Glance. The default list includes glance.common.exception , builtins , and exceptions . The RPC protocol permits interaction with Glance via calls across a network or within the same system. Including a list of exception namespaces with this option enables RPC to propagate the exceptions back to the users. Possible values: A comma separated list of valid exception modules Related options: None api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default , some requests may return multiple results. The number of results to be returned are governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case, can't be greater than the absolute maximum defined by this configuration option. 
Anything greater than this value is trimmed down to the maximum value defined here. Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default auth_region = None string value The region for the authentication service. If "use_user_token" is not in effect and using keystone auth, then region name can be specified. auth_strategy = noauth string value The strategy to use for authentication. If "use_user_token" is not in effect, then auth strategy can be specified. auth_url = None string value The URL to the keystone service. If "use_user_token" is not in effect and using keystone auth, then URL of keystone can be specified. backlog = 4096 integer value Set the number of incoming connection requests. Provide a positive integer value to limit the number of requests in the backlog queue. The default queue size is 4096. An incoming connection to a TCP listener socket is queued before a connection can be established with the server. Setting the backlog for a TCP socket ensures a limited queue size for incoming traffic. Possible values: Positive integer Related options: None bind_host = 0.0.0.0 host address value IP address to bind the glance servers to. Provide an IP address to bind the glance server to. The default value is 0.0.0.0 . Edit this option to enable the server to listen on one particular IP address on the network card. This facilitates selection of a particular network interface for the server. Possible values: A valid IPv4 address A valid IPv6 address Related options: None bind_port = None port value Port number on which the server will listen. Provide a valid port number to bind the server's socket to. This port is then set to identify processes and forward network messages that arrive at the server. The default bind_port value for the API server is 9292 and for the registry server is 9191. Possible values: A valid port number (0 to 65535) Related options: None ca_file = None string value Absolute path to the CA file. Provide a string value representing a valid absolute path to the Certificate Authority file to use for client authentication. A CA file typically contains necessary trusted certificates to use for the client authentication. This is essential to ensure that a secure connection is established to the server via the internet. Possible values: Valid absolute path to the CA file Related options: None cert_file = None string value Absolute path to the certificate file. Provide a string value representing a valid absolute path to the certificate file which is required to start the API service securely. A certificate file typically is a public key container and includes the server's public key, server name, server information and the signature which was a result of the verification process using the CA certificate. This is required for a secure connection establishment. Possible values: Valid absolute path to the certificate file Related options: None client_socket_timeout = 900 integer value Timeout for client connections' socket operations. Provide a valid integer value representing time in seconds to set the period of wait before an incoming connection can be closed. The default value is 900 seconds. The value zero implies wait forever. 
Possible values: Zero Positive integer Related options: None conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. data_api = glance.db.sqlalchemy.api string value Python module path of data access API. Specifies the path to the API to use for accessing the data model. This option determines how the image catalog data will be accessed. Possible values: glance.db.sqlalchemy.api glance.db.registry.api glance.db.simple.api If this option is set to glance.db.sqlalchemy.api then the image catalog data is stored in and read from the database via the SQLAlchemy Core and ORM APIs. Setting this option to glance.db.registry.api will force all database access requests to be routed through the Registry service. This avoids data access from the Glance API nodes for an added layer of security, scalability and manageability. Note In v2 OpenStack Images API, the registry service is optional. In order to use the Registry API in v2, the option enable_v2_registry must be set to True . Finally, when this configuration option is set to glance.db.simple.api , image catalog data is stored in and read from an in-memory data structure. This is primarily used for testing. Related options: enable_v2_api enable_v2_registry Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_publisher_id = image.localhost string value Default publisher_id for outgoing Glance notifications. This is the value that the notification driver will use to identify messages for events originating from the Glance service. Typically, this is the hostname of the instance that generated the message. Possible values: Any reasonable instance identifier, for example: image.host1 Related options: None delayed_delete = False boolean value Turn on/off delayed delete. Typically when an image is deleted, the glance-api service puts the image into deleted state and deletes its data at the same time. Delayed delete is a feature in Glance that delays the actual deletion of image data until a later point in time (as determined by the configuration option scrub_time ). When delayed delete is turned on, the glance-api service puts the image into pending_delete state upon deletion and leaves the image data in the storage backend for the image scrubber to delete at a later time. 
The image scrubber will move the image into deleted state upon successful deletion of image data. Note When delayed delete is turned on, image scrubber MUST be running as a periodic task to prevent the backend storage from filling up with undesired usage. Possible values: True False Related options: scrub_time wakeup_time scrub_pool_size digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Relation options: None disabled_notifications = [] list value List of notifications to be disabled. Specify a list of notifications that should not be emitted. A notification can be given either as a notification type to disable a single event notification, or as a notification group prefix to disable all event notifications within a group. Possible values: A comma-separated list of individual notification types or notification groups to be disabled. Currently supported groups: image image.member task metadef_namespace metadef_object metadef_property metadef_resource_type metadef_tag For a complete listing and description of each event refer to: http://docs.openstack.org/developer/glance/notifications.html Related options: None enable_v1_registry = True boolean value DEPRECATED FOR REMOVAL Deprecated since: Newton *Reason:*The Images (Glance) version 1 API has been DEPRECATED in the Newton release and will be removed on or after Pike release, following the standard OpenStack deprecation policy. Hence, the configuration options specific to the Images (Glance) v1 API are hereby deprecated and subject to removal. Operators are advised to deploy the Images (Glance) v2 API. enable_v2_api = True boolean value Deploy the v2 OpenStack Images API. When this option is set to True , Glance service will respond to requests on registered endpoints conforming to the v2 OpenStack Images API. NOTES: If this option is disabled, then the enable_v2_registry option, which is enabled by default, is also recommended to be disabled. Possible values: True False Related options: enable_v2_registry Deprecated since: Newton *Reason:*The Images (Glance) version 1 API has been DEPRECATED in the Newton release. It will be removed on or after Pike release, following the standard OpenStack deprecation policy. Once we remove the Images (Glance) v1 API, only the Images (Glance) v2 API can be deployed and will be enabled by default making this option redundant. enable_v2_registry = True boolean value Deploy the v2 API Registry service. When this option is set to True , the Registry service will be enabled in Glance for v2 API requests. NOTES: Use of Registry is optional in v2 API, so this option must only be enabled if both enable_v2_api is set to True and the data_api option is set to glance.db.registry.api . If deploying only the v1 OpenStack Images API, this option, which is enabled by default, should be disabled. 
Possible values: True False Related options: enable_v2_api data_api Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html enabled_backends = None dict value Key:Value pair of store identifier and store type. In case of multiple backends should be separated using comma. enabled_import_methods = ['glance-direct', 'web-download', 'copy-image'] list value List of enabled Image Import Methods executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value . The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None http_keepalive = True boolean value Set keep alive option for HTTP over TCP. Provide a boolean value to determine sending of keep alive packets. If set to False , the server returns the header "Connection: close". If set to True , the server returns a "Connection: Keep-Alive" in its responses. This enables retention of the same TCP connection for HTTP conversations instead of opening a new one with each new request. This option must be set to False if the client socket connection needs to be closed explicitly after the response is received and read successfully by the client. Possible values: True False Related options: None image_cache_dir = None string value Base directory for image cache. This is the location where image data is cached and served out of. All cached images are stored directly under this directory. This directory also contains three subdirectories, namely, incomplete , invalid and queue . The incomplete subdirectory is the staging area for downloading images. An image is first downloaded to this directory. When the image download is successful it is moved to the base directory. However, if the download fails, the partially downloaded image file is moved to the invalid subdirectory. The queue`subdirectory is used for queuing images for download. 
This is used primarily by the cache-prefetcher, which can be scheduled as a periodic task like cache-pruner and cache-cleaner, to cache images ahead of their usage. Upon receiving the request to cache an image, Glance touches a file in the `queue directory with the image id as the file name. The cache-prefetcher, when running, polls for the files in queue directory and starts downloading them in the order they were created. When the download is successful, the zero-sized file is deleted from the queue directory. If the download fails, the zero-sized file remains and it'll be retried the time cache-prefetcher runs. Possible values: A valid path Related options: image_cache_sqlite_db image_cache_driver = sqlite string value The driver to use for image cache management. This configuration option provides the flexibility to choose between the different image-cache drivers available. An image-cache driver is responsible for providing the essential functions of image-cache like write images to/read images from cache, track age and usage of cached images, provide a list of cached images, fetch size of the cache, queue images for caching and clean up the cache, etc. The essential functions of a driver are defined in the base class glance.image_cache.drivers.base.Driver . All image-cache drivers (existing and prospective) must implement this interface. Currently available drivers are sqlite and xattr . These drivers primarily differ in the way they store the information about cached images: The sqlite driver uses a sqlite database (which sits on every glance node locally) to track the usage of cached images. The xattr driver uses the extended attributes of files to store this information. It also requires a filesystem that sets atime on the files when accessed. Possible values: sqlite xattr Related options: None image_cache_max_size = 10737418240 integer value The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. Note This is just a threshold for cache-pruner to act upon. It is NOT a hard limit beyond which the image cache would never grow. In fact, depending on how often the cache-pruner runs and how quickly the cache fills, the image cache can far exceed the size specified here very easily. Hence, care must be taken to appropriately schedule the cache-pruner and in setting this limit. Glance caches an image when it is downloaded. Consequently, the size of the image cache grows over time as the number of downloads increases. To keep the cache size from becoming unmanageable, it is recommended to run the cache-pruner as a periodic task. When the cache pruner is kicked off, it compares the current size of image cache and triggers a cleanup if the image cache grew beyond the size specified here. After the cleanup, the size of cache is less than or equal to size specified here. Possible values: Any non-negative integer Related options: None image_cache_sqlite_db = cache.db string value The relative path to sqlite file database that will be used for image cache management. This is a relative path to the sqlite file database that tracks the age and usage statistics of image cache. The path is relative to image cache base directory, specified by the configuration option image_cache_dir . This is a lightweight database with just one table. Possible values: A valid relative path to sqlite file database Related options: image_cache_dir image_cache_stall_time = 86400 integer value The amount of time, in seconds, an incomplete image remains in the cache. 
Incomplete images are images for which download is in progress. Please see the description of configuration option image_cache_dir for more detail. Sometimes, due to various reasons, it is possible the download may hang and the incompletely downloaded image remains in the incomplete directory. This configuration option sets a time limit on how long the incomplete images should remain in the incomplete directory before they are cleaned up. Once an incomplete image spends more time than is specified here, it'll be removed by cache-cleaner on its run. It is recommended to run cache-cleaner as a periodic task on the Glance API nodes to keep the incomplete images from occupying disk space. Possible values: Any non-negative integer Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. key_file = None string value Absolute path to a private key file. Provide a string value representing a valid absolute path to a private key file which is required to establish the client-server connection. Possible values: Absolute path to the private key file Related options: None limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. 
NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max location_strategy = location_order string value Strategy to determine the preference order of image locations. This configuration option indicates the strategy to determine the order in which an image's locations must be accessed to serve the image's data. Glance then retrieves the image data from the first responsive active location it finds in this list. This option takes one of two possible values location_order and store_type . The default value is location_order , which suggests that image data be served by using locations in the order they are stored in Glance. The store_type value sets the image location preference based on the order in which the storage backends are listed as a comma separated list for the configuration option store_type_preference . Possible values: location_order store_type Related options: store_type_preference log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. 
Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_header_line = 16384 integer value Maximum line size of message headers. Provide an integer value representing a length to limit the size of message headers. The default value is 16384. Note max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). However, it is to be kept in mind that larger values for max_header_line would flood the logs. Setting max_header_line to 0 sets no limit for the line size of message headers. Possible values: 0 Positive integer Related options: None max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_request_id_length = 64 integer value Limit the request ID length. Provide an integer value to limit the length of the request ID to the specified length. The default value is 64. Users can change this to any ineteger value between 0 and 16384 however keeping in mind that a larger value may flood the logs. Possible values: Integer value between 0 and 16384 Related options: None metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides location where the temporary data will be stored This option is for Glance internal use only. Glance will save the image data uploaded by the user to staging endpoint during the image import process. This option does not change the staging API endpoint by any means. Note It is discouraged to use same path as [task]/work_dir Note file://<absolute-directory-path> is the only option api_image_import flow will support for now. Note The staging path must be on shared filesystem available to all Glance API nodes. Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir owner_is_tenant = True boolean value Set the image owner to tenant or the authenticated user. Assign a boolean value to determine the owner of an image. When set to True, the owner of the image is the tenant. When set to False, the owner of the image will be the authenticated user issuing the request. Setting it to False makes the image private to the associated user and sharing with other users within the same tenant (or "project") requires explicit image sharing via image membership. Possible values: True False Related options: None Deprecated since: Rocky Reason: The non-default setting for this option misaligns Glance with other OpenStack services with respect to resource ownership. Further, surveys indicate that this option is not used by operators. The option will be removed early in the S development cycle following the standard OpenStack deprecation policy. As the option is not in wide use, no migration path is proposed. property_protection_file = None string value The location of the property protection file. 
Provide a valid path to the property protection file which contains the rules for property protections and the roles/policies associated with them. A property protection file, when set, restricts the Glance image properties to be created, read, updated and/or deleted by a specific set of users that are identified by either roles or policies. If this configuration option is not set, by default, property protections won't be enforced. If a value is specified and the file is not found, the glance-api service will fail to start. More information on property protections can be found at: https://docs.openstack.org/glance/latest/admin/property-protections.html Possible values: Empty string Valid path to the property protection configuration file Related options: property_protection_rule_format property_protection_rule_format = roles string value Rule format for property protection. Provide the desired way to set property protection on Glance image properties. The two permissible values are roles and policies . The default value is roles . If the value is roles , the property protection file must contain a comma separated list of user roles indicating permissions for each of the CRUD operations on each property being protected. If set to policies , a policy defined in policy.json is used to express property protections for each of the CRUD operations. Examples of how property protections are enforced based on roles or policies can be found at: https://docs.openstack.org/glance/latest/admin/property-protections.html#examples Possible values: roles policies Related options: property_protection_file public_endpoint = None string value Public url endpoint to use for Glance versions response. This is the public url endpoint that will appear in the Glance "versions" response. If no value is specified, the endpoint that is displayed in the version's response is that of the host running the API service. Change the endpoint to represent the proxy URL if the API service is running behind a proxy. If the service is running behind a load balancer, add the load balancer's URL for this value. Possible values: None Proxy URL Load balancer URL Related options: None publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. registry_client_ca_file = None string value Absolute path to the Certificate Authority file. 
Provide a string value representing a valid absolute path to the certificate authority file to use for establishing a secure connection to the registry server. Note This option must be set if registry_client_protocol is set to https . Alternatively, the GLANCE_CLIENT_CA_FILE environment variable may be set to a filepath of the CA file. This option is ignored if the registry_client_insecure option is set to True . Possible values: String value representing a valid absolute path to the CA file. Related options: registry_client_protocol registry_client_insecure Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_client_cert_file = None string value Absolute path to the certificate file. Provide a string value representing a valid absolute path to the certificate file to use for establishing a secure connection to the registry server. Note This option must be set if registry_client_protocol is set to https . Alternatively, the GLANCE_CLIENT_CERT_FILE environment variable may be set to a filepath of the certificate file. Possible values: String value representing a valid absolute path to the certificate file. Related options: registry_client_protocol Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_client_insecure = False boolean value Set verification of the registry server certificate. Provide a boolean value to determine whether or not to validate SSL connections to the registry server. By default, this option is set to False and the SSL connections are validated. If set to True , the connection to the registry server is not validated via a certifying authority and the registry_client_ca_file option is ignored. This is the registry's equivalent of specifying --insecure on the command line using glanceclient for the API. Possible values: True False Related options: registry_client_protocol registry_client_ca_file Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_client_key_file = None string value Absolute path to the private key file. Provide a string value representing a valid absolute path to the private key file to use for establishing a secure connection to the registry server. Note This option must be set if registry_client_protocol is set to https . Alternatively, the GLANCE_CLIENT_KEY_FILE environment variable may be set to a filepath of the key file. Possible values: String value representing a valid absolute path to the key file. Related options: registry_client_protocol Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_client_protocol = http string value Protocol to use for communication with the registry server. Provide a string value representing the protocol to use for communication with the registry server. By default, this option is set to http and the connection is not secure. 
This option can be set to https to establish a secure connection to the registry server. In this case, provide a key to use for the SSL connection using the registry_client_key_file option. Also include the CA file and cert file using the options registry_client_ca_file and registry_client_cert_file respectively. Possible values: http https Related options: registry_client_key_file registry_client_cert_file registry_client_ca_file Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_client_timeout = 600 integer value Timeout value for registry requests. Provide an integer value representing the period of time in seconds that the API server will wait for a registry request to complete. The default value is 600 seconds. A value of 0 implies that a request will never timeout. Possible values: Zero Positive integer Related options: None Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_host = 0.0.0.0 host address value Address the registry server is hosted on. Possible values: A valid IP or hostname Related options: None Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_port = 9191 port value Port the registry server is listening on. Possible values: A valid port number Related options: None Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. scrub_pool_size = 1 integer value The size of thread pool to be used for scrubbing images. When there are a large number of images to scrub, it is beneficial to scrub images in parallel so that the scrub queue stays in control and the backend storage is reclaimed in a timely fashion. This configuration option denotes the maximum number of images to be scrubbed in parallel. The default value is one, which signifies serial scrubbing. Any value above one indicates parallel scrubbing. Possible values: Any non-zero positive integer Related options: delayed_delete scrub_time = 0 integer value The amount of time, in seconds, to delay image scrubbing. When delayed delete is turned on, an image is put into pending_delete state upon deletion until the scrubber deletes its image data. Typically, soon after the image is put into pending_delete state, it is available for scrubbing. However, scrubbing can be delayed until a later point using this configuration option. This option denotes the time period an image spends in pending_delete state before it is available for scrubbing. It is important to realize that this has storage implications. The larger the scrub_time , the longer the time to reclaim backend storage from deleted images. 
Possible values: Any non-negative integer Related options: delayed_delete secure_proxy_ssl_header = None string value The HTTP header used to determine the scheme for the original request, even if it was removed by an SSL terminating proxy. Typical value is "HTTP_X_FORWARDED_PROTO". send_identity_headers = False boolean value Send headers received from identity when making requests to registry. Typically, Glance registry can be deployed in multiple flavors, which may or may not include authentication. For example, trusted-auth is a flavor that does not require the registry service to authenticate the requests it receives. However, the registry service may still need a user context to be populated to serve the requests. This can be achieved by the caller (the Glance API usually) passing through the headers it received from authenticating with identity for the same request. The typical headers sent are X-User-Id , X-Tenant-Id , X-Roles , X-Identity-Status and X-Service-Catalog . Provide a boolean value to determine whether to send the identity headers to provide tenant and user information along with the requests to registry service. By default, this option is set to False , which means that user and tenant information is not available readily. It must be obtained by authenticating. Hence, if this is set to False , flavor must be set to value that either includes authentication or authenticated user context. Possible values: True False Related options: flavor show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url . When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy . The image locations are shown under the image property locations . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . 
Possible values: True False Related options: show_image_direct_url location_strategy Deprecated since: Newton *Reason:*Use of this option, deprecated since Newton, is a security risk and will be removed once we figure out a way to satisfy those use cases that currently require it. An earlier announcement that the same functionality can be achieved with greater granularity by using policies is incorrect. You cannot work around this option via policy configuration at the present time, though that is the direction we believe the fix will take. Please keep an eye on the Glance release notes to stay up to date on progress in addressing this issue. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. tcp_keepidle = 600 integer value Set the wait time before a connection recheck. Provide a positive integer value representing time in seconds which is set as the idle wait time before a TCP keep alive packet can be sent to the host. The default value is 600 seconds. Setting tcp_keepidle helps verify at regular intervals that a connection is intact and prevents frequent TCP connection reestablishment. Possible values: Positive integer value representing time in seconds Related options: None transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. use_user_token = True boolean value Whether to pass through the user token when making requests to the registry. To prevent failures with token expiration during big files upload, it is recommended to set this parameter to False.If "use_user_token" is not in effect, then admin credentials can be specified. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. 
Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: None watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. workers = None integer value Number of Glance worker processes to start. Provide a non-negative integer value to set the number of child process workers to service requests. By default, the number of CPUs available is set as the value for workers limited to 8. For example if the processor count is 6, 6 workers will be used, if the processor count is 24 only 8 workers will be used. The limit will only apply to the default value, if 24 workers is configured, 24 is used. Each worker process is made to listen on the port set in the configuration file and contains a greenthread pool of size 1000. Note Setting the number of workers to zero, triggers the creation of a single API process with a greenthread pool of size 1000. Possible values: 0 Positive integer value (typically equal to the number of CPUs) Related options: None 3.1.2. cinder The following table outlines the options available under the [cinder] group in the /etc/glance/glance-api.conf file. Table 3.1. cinder Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev2::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. 
This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_catalog_info cinder_enforce_multipath = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. Possible values: True or False Related options: cinder_use_multipath cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_mount_point_base = /var/lib/glance/mnt string value Directory where the NFS volume is mounted on the glance node. Possible values: A string representing absolute path of mount point. cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. 
Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_use_multipath = False boolean value Flag to identify multipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: True or False Related options: cinder_enforce_multipath cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None 3.1.3. cors The following table outlines the options available under the [cors] group in the /etc/glance/glance-api.conf file. Table 3.2. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['Content-MD5', 'X-Image-Meta-Checksum', 'X-Storage-Token', 'Accept-Encoding', 'X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Image-Meta-Checksum', 'X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 3.1.4. database The following table outlines the options available under the [database] group in the /etc/glance/glance-api.conf file. Table 3.3. 
database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1¶m2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. use_tpool = False boolean value Enable the experimental use of thread pooling for all DB API calls 3.1.5. file The following table outlines the options available under the [file] group in the /etc/glance/glance-api.conf file. Table 3.4. file Configuration option = Default value Type Description filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. 
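As a brief illustration of the [database] options above, a typical deployment combines an SQLAlchemy connection string with modest pool tuning. This is only a sketch; the host name, credentials, and database name are hypothetical, and the pool values shown are the documented defaults.

[database]
backend = sqlalchemy
# Hypothetical MySQL host and credentials.
connection = mysql+pymysql://glance:GLANCE_DB_PASSWORD@db.example.org/glance
max_pool_size = 5
max_overflow = 50
connection_recycle_time = 3600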
If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access to could be made a member of the group that owns the files created. Assigning a value less then or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None filesystem_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. 
When this option is enabled, null byte sequences are not actually written to the filesystem; the resulting holes are interpreted by the filesystem as null bytes and do not consume storage. Enabling this feature also speeds up image upload and saves network traffic, in addition to saving space in the backend, because null byte sequences are not sent over the network. Possible Values: True False Related options: None 3.1.6. glance.store.http.store The following table outlines the options available under the [glance.store.http.store] group in the /etc/glance/glance-api.conf file. Table 3.5. glance.store.http.store Configuration option = Default value Type Description http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified by the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file 3.1.7. glance.store.rbd.store The following table outlines the options available under the [glance.store.rbd.store] group in the /etc/glance/glance-api.conf file. Table 3.6. glance.store.rbd.store Configuration option = Default value Type Description rados_connect_timeout = 0 integer value Timeout value for connecting to the Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster, that is, it sets the time glance-api waits before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order.
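Returning to the [glance.store.http.store] options described above, the following sketch shows how a proxy and a custom CA bundle might be combined; the proxy addresses follow the format given in the table and the CA file path is hypothetical.

[glance.store.http.store]
http_proxy_information = http:10.0.0.1:3128, https:10.0.0.1:1080
# https_insecure is ignored once a CA bundle is configured.
https_ca_certificates_file = /etc/glance/https-ca-bundle.crt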
See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section. Note 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store for use by Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool. Each pool is defined with the number of placement groups it can contain. The default pool that is used is images. More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rbd_thin_provisioning = False boolean value Enable or disable thin provisioning in this backend. When this option is enabled, null byte sequences are not actually written to the RBD backend; the resulting holes are interpreted by Ceph as null bytes and do not consume storage. Enabling this feature also speeds up image upload and saves network traffic, in addition to saving space in the backend, because null byte sequences are not sent over the network. Possible Values: True False Related options: None 3.1.8. glance.store.sheepdog.store The following table outlines the options available under the [glance.store.sheepdog.store] group in the /etc/glance/glance-api.conf file. Table 3.7. glance.store.sheepdog.store Configuration option = Default value Type Description sheepdog_store_address = 127.0.0.1 host address value Address to bind the Sheepdog daemon to. Provide a string value representing the address to bind the Sheepdog daemon to. The default address set for the sheep is 127.0.0.1. The Sheepdog daemon, also called sheep, manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages directed to the address set using the sheepdog_store_address option to store chunks of Glance images.
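Bringing the RBD options above together, a minimal sketch of a Ceph-backed store follows; the Ceph configuration path and RADOS user are hypothetical, while the pool and chunk size shown are the documented defaults.

[glance.store.rbd.store]
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
# Power-of-two chunk size, in megabytes, for best performance.
rbd_store_chunk_size = 8
rbd_thin_provisioning = True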
Possible values: A valid IPv4 address A valid IPv6 address A valid hostname Related Options: sheepdog_store_port Deprecated since: Train Reason: The Sheepdog project is no longer actively maintained. The Sheepdog driver is scheduled for removal in the U development cycle. sheepdog_store_chunk_size = 64 integer value Chunk size for images to be stored in Sheepdog data store. Provide an integer value representing the size in mebibyte (1048576 bytes) to chunk Glance images into. The default chunk size is 64 mebibytes. When using Sheepdog distributed storage system, the images are chunked into objects of this size and then stored across the distributed data store to use for Glance. Chunk sizes, if a power of two, help avoid fragmentation and enable improved performance. Possible values: Positive integer value representing size in mebibytes. Related Options: None Deprecated since: Train Reason: The Sheepdog project is no longer actively maintained. The Sheepdog driver is scheduled for removal in the U development cycle. sheepdog_store_port = 7000 port value Port number on which the sheep daemon will listen. Provide an integer value representing a valid port number on which you want the Sheepdog daemon to listen on. The default port is 7000. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages it receives on the port number set using sheepdog_store_port option to store chunks of Glance images. Possible values: A valid port number (0 to 65535) Related Options: sheepdog_store_address Deprecated since: Train Reason: The Sheepdog project is no longer actively maintained. The Sheepdog driver is scheduled for removal in the U development cycle. 3.1.9. glance.store.swift.store The following table outlines the options available under the [glance.store.swift.store] group in the /etc/glance/glance-api.conf file. Table 3.8. glance.store.swift.store Configuration option = Default value Type Description default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. 
Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. 
Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. 
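As a sketch of how the Swift segmentation options above are typically set together (the values are illustrative, not recommendations), a 6.2 GB image uploaded with the settings below would be split into six 1 GB segments plus one 0.2 GB segment, matching the example in the table.

[glance.store.swift.store]
# Both values are in MB: segment images larger than 5 GB into 1 GB chunks.
swift_store_large_object_size = 5120
swift_store_large_object_chunk_size = 1024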
Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. 
Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: * This is required only when the configuration option swift_buffer_on_upload is set to True. * This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size 3.1.10. glance.store.vmware_datastore.store The following table outlines the options available under the [glance.store.vmware_datastore.store] group in the /etc/glance/glance-api.conf file. Table 3.9. glance.store.vmware_datastore.store Configuration option = Default value Type Description vmware_api_retry_count = 10 integer value The number of VMware API retries. 
This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Cerificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: * None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. 
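A hedged sketch of a VMware-backed store using the options above; the vCenter address, credentials, CA file, and datastore specification are hypothetical and follow the <datacenter_path>:<datastore_name>:<optional_weight> format described for vmware_datastores.

[glance.store.vmware_datastore.store]
vmware_server_host = vcenter.example.org
vmware_server_username = glance-svc
vmware_server_password = VCENTER_PASSWORD
vmware_ca_file = /etc/glance/vcenter-ca.pem
# Datastore "datastore1" in datacenter "dc1", weight 100.
vmware_datastores = dc1:datastore1:100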
This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.1.11. glance_store The following table outlines the options available under the [glance_store] group in the /etc/glance/glance-api.conf file. Table 3.10. glance_store Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev2::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. 
For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_catalog_info cinder_enforce_multipath = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. Possible values: True or False Related options: cinder_use_multipath cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_mount_point_base = /var/lib/glance/mnt string value Directory where the NFS volume is mounted on the glance node. Possible values: A string representing absolute path of mount point. cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. 
This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_use_multipath = False boolean value Flag to identify multipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: True or False Related options: cinder_enforce_multipath cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None default_backend = None string value The store identifier for the default backend in which data will be stored. The value must be defined as one of the keys in the dict defined by the enabled_backends configuration option in the DEFAULT configuration group. If a value is not defined for this option: the consuming service may refuse to start store_add calls that do not specify a specific backend will raise a glance_store.exceptions.UnknownScheme exception Related Options: enabled_backends default_store = file string value The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. Note The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: file filesystem http https swift swift+http swift+https swift+config rbd sheepdog cinder vsphere Related Options: stores Deprecated since: Rocky Reason: This option is deprecated against new config option ``default_backend`` which acts similar to ``default_store`` config option. This option is scheduled for removal in the U development cycle. default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. 
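To illustrate how default_backend relates to enabled_backends as described above, the following is a hedged sketch. It assumes the store-identifier:driver format accepted by enabled_backends in the DEFAULT group; the identifiers fast_rbd and cheap_file are arbitrary examples, not defaults.

[DEFAULT]
# Hypothetical store identifiers mapped to their drivers.
enabled_backends = fast_rbd:rbd, cheap_file:file

[glance_store]
# Must match one of the keys defined in enabled_backends.
default_backend = fast_rbd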
Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access to could be made a member of the group that owns the files created. Assigning a value less then or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None filesystem_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. 
This configuration option enable the feature of not really write null byte sequences on the filesystem, the holes who can appear will automatically be interpreted by the filesystem as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. 
Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rbd_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the RBD backend, the holes who can appear will automatically be interpreted by Ceph as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None sheepdog_store_address = 127.0.0.1 host address value Address to bind the Sheepdog daemon to. Provide a string value representing the address to bind the Sheepdog daemon to. The default address set for the sheep is 127.0.0.1. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages directed to the address set using sheepdog_store_address option to store chunks of Glance images. Possible values: A valid IPv4 address A valid IPv6 address A valid hostname Related Options: sheepdog_store_port Deprecated since: Train Reason: The Sheepdog project is no longer actively maintained. The Sheepdog driver is scheduled for removal in the U development cycle. 
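As a consolidated sketch of a classic single-store [glance_store] configuration drawing on several of the options described above (the Ceph configuration path and RADOS user are hypothetical, and this style is deprecated in favor of enabled_backends and default_backend):

[glance_store]
stores = rbd, http
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images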
sheepdog_store_chunk_size = 64 integer value Chunk size for images to be stored in Sheepdog data store. Provide an integer value representing the size in mebibyte (1048576 bytes) to chunk Glance images into. The default chunk size is 64 mebibytes. When using Sheepdog distributed storage system, the images are chunked into objects of this size and then stored across the distributed data store to use for Glance. Chunk sizes, if a power of two, help avoid fragmentation and enable improved performance. Possible values: Positive integer value representing size in mebibytes. Related Options: None Deprecated since: Train Reason: The Sheepdog project is no longer actively maintained. The Sheepdog driver is scheduled for removal in the U development cycle. sheepdog_store_port = 7000 port value Port number on which the sheep daemon will listen. Provide an integer value representing a valid port number on which you want the Sheepdog daemon to listen on. The default port is 7000. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages it receives on the port number set using sheepdog_store_port option to store chunks of Glance images. Possible values: A valid port number (0 to 65535) Related Options: sheepdog_store_address Deprecated since: Train Reason: The Sheepdog project is no longer actively maintained. The Sheepdog driver is scheduled for removal in the U development cycle. stores = ['file', 'http'] list value List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http . Possible values: A comma separated list that could include: file http swift rbd sheepdog cinder vmware Related Options: default_store Deprecated since: Rocky Reason: This option is deprecated against new config option ``enabled_backends`` which helps to configure multiple backend stores of different schemes. This option is scheduled for removal in the U development cycle. swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. 
If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. 
Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. 
Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. 
Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: * This is required only when the configuration option swift_buffer_on_upload is set to True. * This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Cerificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. 
Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: * None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. 
Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.1.12. image_format The following table outlines the options available under the [image_format] group in the /etc/glance/glance-api.conf file. Table 3.11. image_format Configuration option = Default value Type Description container_formats = ['ami', 'ari', 'aki', 'bare', 'ovf', 'ova', 'docker', 'compressed'] list value Supported values for the container_format image attribute disk_formats = ['ami', 'ari', 'aki', 'vhd', 'vhdx', 'vmdk', 'raw', 'qcow2', 'vdi', 'iso', 'ploop'] list value Supported values for the disk_format image attribute 3.1.13. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/glance/glance-api.conf file. Table 3.12. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. 
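To illustrate the [image_format] options above, an operator who wants to accept only a reduced set of image formats could use a snippet along the following lines in /etc/glance/glance-api.conf (the particular formats chosen here are arbitrary examples, not recommendations):

[image_format]
disk_formats = qcow2,raw,iso
container_formats = bare,ovf

Image create requests that declare a disk_format or container_format outside these lists are rejected, because these options define the supported values for the corresponding image attributes.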
http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = admin string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" or "admin"(default). keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. 
This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 3.1.14. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/glance/glance-api.conf file. Table 3.13. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 3.1.15. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/glance/glance-api.conf file. Table 3.14. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. 
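For the [oslo_concurrency] group described above, the setting operators most commonly change is lock_path, which must be set when external lock files are used. A minimal sketch, assuming a dedicated lock directory writable only by the user running the Glance processes (the path itself is an example, not a requirement):

[oslo_concurrency]
disable_process_locking = False
lock_path = /var/lib/glance/locks

Alternatively, the OSLO_LOCK_PATH environment variable can supply the lock directory instead of setting lock_path in the file, as noted in the option description.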
group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 3.1.16. 
oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/glance/glance-api.conf file. Table 3.15. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate 3.1.17. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/glance/glance-api.conf file. Table 3.16. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The Drivers(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 3.1.18. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/glance/glance-api.conf file. Table 3.17. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. 
The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception is then used to loop for a timeout, giving the sender a chance to recover. This flag is deprecated, and it will no longer be possible to deactivate this functionality. enable_cancel_on_failover = False boolean value Enable the x-cancel-on-ha-failover flag so that the rabbitmq server will cancel and notify consumers when a queue is down heartbeat_in_pthread = False boolean value EXPERIMENTAL: Run the health check heartbeat thread through a native python thread. By default, if this option isn't provided, the health check heartbeat will inherit the execution model from the parent process. For example, if the parent process has monkey patched the stdlib by using eventlet/greenlet, then the heartbeat will be run through a green thread. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold the heartbeat is checked. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait for a missing client before abandoning sending it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to back off between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23.
SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 3.1.19. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/glance/glance-api.conf file. Table 3.18. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. 3.1.20. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-api.conf file. Table 3.19. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 3.1.21. paste_deploy The following table outlines the options available under the [paste_deploy] group in the /etc/glance/glance-api.conf file. Table 3.20. paste_deploy Configuration option = Default value Type Description config_file = None string value Name of the paste configuration file. Provide a string value representing the name of the paste configuration file to use for configuring pipelines for server application deployments. NOTES: Provide the name or the path relative to the glance directory for the paste configuration file and not the absolute path. The sample paste configuration file shipped with Glance need not be edited in most cases as it comes with ready-made pipelines for all common deployment flavors. If no value is specified for this option, the paste.ini file with the prefix of the corresponding Glance service's configuration file name will be searched for in the known configuration directories. (For example, if this option is missing from or has no value set in glance-api.conf , the service will look for a file named glance-api-paste.ini .) If the paste configuration file is not found, the service will not start. Possible values: A string value representing the name of the paste configuration file. 
Related Options: flavor flavor = None string value Deployment flavor to use in the server application pipeline. Provide a string value representing the appropriate deployment flavor used in the server application pipeline. This is typically the partial name of a pipeline in the paste configuration file with the service name removed. For example, if your paste section name in the paste configuration file is [pipeline:glance-api-keystone], set flavor to keystone . Possible values: String value representing a partial pipeline name. Related Options: config_file 3.1.22. profiler The following table outlines the options available under the [profiler] group in the /etc/glance/glance-api.conf file. Table 3.21. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filter traces that contain error/exception to a separated place. Default value is set to False. Possible values: True: Enable filter traces that contain error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redissentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinal_service_name=mymaster ). socket_timeout = 0.1 floating point value Redissentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). 
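The [profiler] options above are typically changed together when tracing is needed. A minimal sketch, assuming a Redis notifier running on the local host and a placeholder HMAC key (both values are examples only; profiling remains disabled unless enabled is set to True):

[profiler]
enabled = True
hmac_keys = SECRET_KEY
connection_string = redis://127.0.0.1:6379

As the hmac_keys description notes, at least one key must be shared with the other OpenStack services involved so that a single trace can span all of them.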
trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can the be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 3.1.23. store_type_location_strategy The following table outlines the options available under the [store_type_location_strategy] group in the /etc/glance/glance-api.conf file. Table 3.22. store_type_location_strategy Configuration option = Default value Type Description store_type_preference = [] list value Preference order of storage backends. Provide a comma separated list of store names in the order in which images should be retrieved from storage backends. These store names must be registered with the stores configuration option. Note The store_type_preference configuration option is applied only if store_type is chosen as a value for the location_strategy configuration option. An empty list will not change the location order. Possible values: Empty list Comma separated list of registered store names. Legal values are: file http rbd swift sheepdog cinder vmware Related options: location_strategy stores 3.1.24. task The following table outlines the options available under the [task] group in the /etc/glance/glance-api.conf file. Table 3.23. task Configuration option = Default value Type Description task_executor = taskflow string value Task executor to be used to run task scripts. Provide a string value representing the executor to use for task executions. By default, TaskFlow executor is used. TaskFlow helps make task executions easy, consistent, scalable and reliable. It also enables creation of lightweight task objects and/or functions that are combined together into flows in a declarative manner. Possible values: taskflow Related Options: None task_time_to_live = 48 integer value Time in hours for which a task lives after, either succeeding or failing work_dir = None string value Absolute path to the work directory to use for asynchronous task operations. The directory set here will be used to operate over images - normally before they are imported in the destination store. Note When providing a value for work_dir , please make sure that enough space is provided for concurrent tasks to run efficiently without running out of space. A rough estimation can be done by multiplying the number of max_workers with an average image size (e.g 500MB). The image size estimation should be done based on the average size in your deployment. Note that depending on the tasks running you may need to multiply this number by some factor depending on what the task does. For example, you may want to double the available size if image conversion is enabled. All this being said, remember these are just estimations and you should do them based on the worst case scenario and be prepared to act in case they were wrong. Possible values: String value representing the absolute path to the working directory Related Options: None 3.1.25. taskflow_executor The following table outlines the options available under the [taskflow_executor] group in the /etc/glance/glance-api.conf file. Table 3.24. taskflow_executor Configuration option = Default value Type Description conversion_format = None string value Set the desired image conversion format. 
Provide a valid image format to which you want images to be converted before they are stored for consumption by Glance. Appropriate image format conversions are desirable for specific storage backends in order to facilitate efficient handling of bandwidth and usage of the storage infrastructure. By default, conversion_format is not set and must be set explicitly in the configuration file. The allowed values for this option are raw , qcow2 and vmdk . The raw format is the unstructured disk format and should be chosen when RBD or Ceph storage backends are used for image storage. qcow2 is supported by the QEMU emulator that expands dynamically and supports Copy on Write. The vmdk is another common disk format supported by many common virtual machine monitors like VMWare Workstation. Possible values: qcow2 raw vmdk Related options: disk_formats engine_mode = parallel string value Set the taskflow engine mode. Provide a string type value to set the mode in which the taskflow engine would schedule tasks to the workers on the hosts. Based on this mode, the engine executes tasks either in single or multiple threads. The possible values for this configuration option are: serial and parallel . When set to serial , the engine runs all the tasks in a single thread which results in serial execution of tasks. Setting this to parallel makes the engine run tasks in multiple threads. This results in parallel execution of tasks. Possible values: serial parallel Related options: max_workers max_workers = 10 integer value Set the number of engine executable tasks. Provide an integer value to limit the number of workers that can be instantiated on the hosts. In other words, this number defines the number of parallel tasks that can be executed at the same time by the taskflow engine. This value can be greater than one when the engine mode is set to parallel. Possible values: Integer value greater than or equal to 1 Related options: engine_mode 3.2. glance-scrubber.conf This section contains options for the /etc/glance/glance-scrubber.conf file. 3.2.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-scrubber.conf file. . Configuration option = Default value Type Description allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties . By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via image_property_quota configuration option. Possible values: True False Related options: image_property_quota api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default , some requests may return multiple results. The number of results to be returned are governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case, can't be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here. 
Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default daemon = False boolean value Run scrubber as a daemon. This boolean configuration option indicates whether scrubber should run as a long-running process that wakes up at regular intervals to scrub images. The wake up interval can be specified using the configuration option wakeup_time . If this configuration option is set to False , which is the default value, scrubber runs once to scrub images and exits. In this case, if the operator wishes to implement continuous scrubbing of images, scrubber needs to be scheduled as a cron job. Possible values: True False Related options: wakeup_time data_api = glance.db.sqlalchemy.api string value Python module path of data access API. Specifies the path to the API to use for accessing the data model. This option determines how the image catalog data will be accessed. Possible values: glance.db.sqlalchemy.api glance.db.registry.api glance.db.simple.api If this option is set to glance.db.sqlalchemy.api then the image catalog data is stored in and read from the database via the SQLAlchemy Core and ORM APIs. Setting this option to glance.db.registry.api will force all database access requests to be routed through the Registry service. This avoids data access from the Glance API nodes for an added layer of security, scalability and manageability. Note In v2 OpenStack Images API, the registry service is optional. In order to use the Registry API in v2, the option enable_v2_registry must be set to True . Finally, when this configuration option is set to glance.db.simple.api , image catalog data is stored in and read from an in-memory data structure. This is primarily used for testing. Related options: enable_v2_api enable_v2_registry Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. delayed_delete = False boolean value Turn on/off delayed delete. Typically when an image is deleted, the glance-api service puts the image into deleted state and deletes its data at the same time. Delayed delete is a feature in Glance that delays the actual deletion of image data until a later point in time (as determined by the configuration option scrub_time ). When delayed delete is turned on, the glance-api service puts the image into pending_delete state upon deletion and leaves the image data in the storage backend for the image scrubber to delete at a later time. 
The image scrubber will move the image into deleted state upon successful deletion of image data. Note When delayed delete is turned on, image scrubber MUST be running as a periodic task to prevent the backend storage from filling up with undesired usage. Possible values: True False Related options: scrub_time wakeup_time scrub_pool_size digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Related options: None enable_v1_registry = True boolean value DEPRECATED FOR REMOVAL Deprecated since: Newton Reason: The Images (Glance) version 1 API has been DEPRECATED in the Newton release and will be removed on or after the Pike release, following the standard OpenStack deprecation policy. Hence, the configuration options specific to the Images (Glance) v1 API are hereby deprecated and subject to removal. Operators are advised to deploy the Images (Glance) v2 API. enable_v2_api = True boolean value Deploy the v2 OpenStack Images API. When this option is set to True , Glance service will respond to requests on registered endpoints conforming to the v2 OpenStack Images API. NOTES: If this option is disabled, then the enable_v2_registry option, which is enabled by default, is also recommended to be disabled. Possible values: True False Related options: enable_v2_registry Deprecated since: Newton Reason: The Images (Glance) version 1 API has been DEPRECATED in the Newton release. It will be removed on or after the Pike release, following the standard OpenStack deprecation policy. Once we remove the Images (Glance) v1 API, only the Images (Glance) v2 API can be deployed and will be enabled by default, making this option redundant. enable_v2_registry = True boolean value Deploy the v2 API Registry service. When this option is set to True , the Registry service will be enabled in Glance for v2 API requests. NOTES: Use of Registry is optional in v2 API, so this option must only be enabled if both enable_v2_api is set to True and the data_api option is set to glance.db.registry.api . If deploying only the v1 OpenStack Images API, this option, which is enabled by default, should be disabled. Possible values: True False Related options: enable_v2_api data_api Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html enabled_import_methods = ['glance-direct', 'web-download', 'copy-image'] list value List of enabled Image Import Methods. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value .
The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. 
However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. 
Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides location where the temporary data will be stored This option is for Glance internal use only. Glance will save the image data uploaded by the user to staging endpoint during the image import process. This option does not change the staging API endpoint by any means. Note It is discouraged to use same path as [task]/work_dir Note file://<absolute-directory-path> is the only option api_image_import flow will support for now. Note The staging path must be on shared filesystem available to all Glance API nodes. Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. restore = None string value Restore the image status from pending_delete to active . This option is used by administrator to reset the image's status from pending_delete to active when the image is deleted by mistake and pending delete feature is enabled in Glance. Please make sure the glance-scrubber daemon is stopped before restoring the image to avoid image data inconsistency. Possible values: image's uuid scrub_pool_size = 1 integer value The size of thread pool to be used for scrubbing images. When there are a large number of images to scrub, it is beneficial to scrub images in parallel so that the scrub queue stays in control and the backend storage is reclaimed in a timely fashion. This configuration option denotes the maximum number of images to be scrubbed in parallel. The default value is one, which signifies serial scrubbing. Any value above one indicates parallel scrubbing. Possible values: Any non-zero positive integer Related options: delayed_delete scrub_time = 0 integer value The amount of time, in seconds, to delay image scrubbing. When delayed delete is turned on, an image is put into pending_delete state upon deletion until the scrubber deletes its image data. Typically, soon after the image is put into pending_delete state, it is available for scrubbing. However, scrubbing can be delayed until a later point using this configuration option. This option denotes the time period an image spends in pending_delete state before it is available for scrubbing. It is important to realize that this has storage implications. 
The larger the scrub_time , the longer the time to reclaim backend storage from deleted images. Possible values: Any non-negative integer Related options: delayed_delete show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url . When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy . The image locations are shown under the image property locations . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . Possible values: True False Related options: show_image_direct_url location_strategy Deprecated since: Newton *Reason:*Use of this option, deprecated since Newton, is a security risk and will be removed once we figure out a way to satisfy those use cases that currently require it. An earlier announcement that the same functionality can be achieved with greater granularity by using policies is incorrect. You cannot work around this option via policy configuration at the present time, though that is the direction we believe the fix will take. Please keep an eye on the Glance release notes to stay up to date on progress in addressing this issue. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. 
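To show how the scrubber-related options in this table work together, the following is a minimal, illustrative [DEFAULT] snippet for /etc/glance/glance-scrubber.conf ; the interval and pool-size values are placeholders chosen for the example, not recommendations.

```
[DEFAULT]
# Keep deleted image data in pending_delete state instead of removing it immediately.
delayed_delete = True
# Wait this many seconds after deletion before an image becomes eligible for scrubbing.
scrub_time = 3600
# Run the scrubber as a long-running daemon rather than a one-shot cron job.
daemon = True
# In daemon mode, wake up every 300 seconds and scrub eligible images.
wakeup_time = 300
# Scrub up to four images in parallel.
scrub_pool_size = 4
```

With a configuration along these lines, deleted images remain in pending_delete for at least an hour, and the scrubber daemon reclaims their backend storage on each wake-up.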
use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: None wakeup_time = 300 integer value Time interval, in seconds, between scrubber runs in daemon mode. Scrubber can be run either as a cron job or daemon. When run as a daemon, this configuration option specifies the time period between two runs. When the scrubber wakes up, it fetches and scrubs all pending_delete images that are available for scrubbing after taking scrub_time into consideration. If the wakeup time is set to a large number, there may be a large number of images to be scrubbed for each run. Also, this impacts how quickly the backend storage is reclaimed. Possible values: Any non-negative integer Related options: daemon delayed_delete watch-log-file = False boolean value Uses a logging handler designed to watch the file system. When the log file is moved or removed, this handler will open a new log file with the specified path instantaneously. It makes sense only if the log_file option is specified and the Linux platform is used. This option is ignored if log_config_append is set. 3.2.2. database The following table outlines the options available under the [database] group in the /etc/glance/glance-scrubber.conf file. Table 3.25. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy.
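As a brief illustration of the [database] options listed above, a hedged example follows; the connection URL, credentials, host and database name are placeholders and must be replaced with values from your deployment.

```
[database]
# SQLAlchemy URL for the Glance database (user, password, host and DB name are placeholders).
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
# Recycle pooled connections after an hour so stale connections are not reused.
connection_recycle_time = 3600
# Retry a failed database operation up to 20 times, backing off up to 10 seconds between retries.
db_max_retries = 20
db_max_retry_interval = 10
# Allow up to 50 overflow connections beyond the pool size.
max_overflow = 50
```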
max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. use_tpool = False boolean value Enable the experimental use of thread pooling for all DB API calls 3.2.3. glance_store The following table outlines the options available under the [glance_store] group in the /etc/glance/glance-scrubber.conf file. Table 3.26. glance_store Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev2::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. 
Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_catalog_info cinder_enforce_multipath = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. Possible values: True or False Related options: cinder_use_multipath cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_mount_point_base = /var/lib/glance/mnt string value Directory where the NFS volume is mounted on the glance node. Possible values: A string representing absolute path of mount point. cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. 
Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_use_multipath = False boolean value Flag to identify multipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: True or False Related options: cinder_enforce_multipath cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None default_store = file string value The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. Note The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: file filesystem http https swift swift+http swift+https swift+config rbd sheepdog cinder vsphere Related Options: stores Deprecated since: Rocky Reason: This option is deprecated against new config option ``default_backend`` which acts similar to ``default_store`` config option. This option is scheduled for removal in the U development cycle. default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. 
If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access to could be made a member of the group that owns the files created. Assigning a value less then or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None filesystem_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. 
This configuration option enable the feature of not really write null byte sequences on the filesystem, the holes who can appear will automatically be interpreted by the filesystem as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. 
Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rbd_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the RBD backend, the holes who can appear will automatically be interpreted by Ceph as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None sheepdog_store_address = 127.0.0.1 host address value Address to bind the Sheepdog daemon to. Provide a string value representing the address to bind the Sheepdog daemon to. The default address set for the sheep is 127.0.0.1. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages directed to the address set using sheepdog_store_address option to store chunks of Glance images. Possible values: A valid IPv4 address A valid IPv6 address A valid hostname Related Options: sheepdog_store_port Deprecated since: Train Reason: The Sheepdog project is no longer actively maintained. The Sheepdog driver is scheduled for removal in the U development cycle. 
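Combining the RBD-related options described above, the snippet below sketches a [glance_store] configuration for a Ceph-backed deployment; the Ceph configuration path, Cephx user and pool name are assumptions that must match the actual cluster.

```
[glance_store]
# Use the Ceph RBD driver as the default scheme for newly added images.
default_store = rbd
# Ceph cluster configuration file and the Cephx user Glance authenticates as (placeholders).
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
# RADOS pool that will hold the chunked image objects.
rbd_store_pool = images
# Chunk images into 8 MB RADOS objects; a power of two gives the best performance.
rbd_store_chunk_size = 8
# Skip writing runs of null bytes and let Ceph treat them as sparse holes.
rbd_thin_provisioning = True
```

Note that the rbd scheme must also be registered through the stores option described later in this table for the store to be usable.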
sheepdog_store_chunk_size = 64 integer value Chunk size for images to be stored in Sheepdog data store. Provide an integer value representing the size in mebibyte (1048576 bytes) to chunk Glance images into. The default chunk size is 64 mebibytes. When using Sheepdog distributed storage system, the images are chunked into objects of this size and then stored across the distributed data store to use for Glance. Chunk sizes, if a power of two, help avoid fragmentation and enable improved performance. Possible values: Positive integer value representing size in mebibytes. Related Options: None Deprecated since: Train Reason: The Sheepdog project is no longer actively maintained. The Sheepdog driver is scheduled for removal in the U development cycle. sheepdog_store_port = 7000 port value Port number on which the sheep daemon will listen. Provide an integer value representing a valid port number on which you want the Sheepdog daemon to listen on. The default port is 7000. The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages it receives on the port number set using sheepdog_store_port option to store chunks of Glance images. Possible values: A valid port number (0 to 65535) Related Options: sheepdog_store_address Deprecated since: Train Reason: The Sheepdog project is no longer actively maintained. The Sheepdog driver is scheduled for removal in the U development cycle. stores = ['file', 'http'] list value List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http . Possible values: A comma separated list that could include: file http swift rbd sheepdog cinder vmware Related Options: default_store Deprecated since: Rocky Reason: This option is deprecated against new config option ``enabled_backends`` which helps to configure multiple backend stores of different schemes. This option is scheduled for removal in the U development cycle. swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. 
If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. 
Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. 
Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. 
Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: * This is required only when the configuration option swift_buffer_on_upload is set to True. * This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Cerificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. 
Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: * None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. 
Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.2.4. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/glance/glance-scrubber.conf file. Table 3.27. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 3.2.5. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-scrubber.conf file. Table 3.28. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 3.3. glance-cache.conf This section contains options for the /etc/glance/glance-cache.conf file. 3.3.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-cache.conf file. . Configuration option = Default value Type Description admin_password = None string value The administrators password. If "use_user_token" is not in effect, then admin credentials can be specified. admin_tenant_name = None string value The tenant name of the administrative user. If "use_user_token" is not in effect, then admin tenant name can be specified. admin_user = None string value The administrators user name. 
If "use_user_token" is not in effect, then admin credentials can be specified. allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties . By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via image_property_quota configuration option. Possible values: True False Related options: image_property_quota api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default , some requests may return multiple results. The number of results to be returned are governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case, can't be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here. Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default auth_region = None string value The region for the authentication service. If "use_user_token" is not in effect and using keystone auth, then region name can be specified. auth_strategy = noauth string value The strategy to use for authentication. If "use_user_token" is not in effect, then auth strategy can be specified. auth_url = None string value The URL to the keystone service. If "use_user_token" is not in effect and using keystone auth, then URL of keystone can be specified. data_api = glance.db.sqlalchemy.api string value Python module path of data access API. Specifies the path to the API to use for accessing the data model. This option determines how the image catalog data will be accessed. Possible values: glance.db.sqlalchemy.api glance.db.registry.api glance.db.simple.api If this option is set to glance.db.sqlalchemy.api then the image catalog data is stored in and read from the database via the SQLAlchemy Core and ORM APIs. Setting this option to glance.db.registry.api will force all database access requests to be routed through the Registry service. This avoids data access from the Glance API nodes for an added layer of security, scalability and manageability. Note In v2 OpenStack Images API, the registry service is optional. In order to use the Registry API in v2, the option enable_v2_registry must be set to True . Finally, when this configuration option is set to glance.db.simple.api , image catalog data is stored in and read from an in-memory data structure. This is primarily used for testing. Related options: enable_v2_api enable_v2_registry Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. 
default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Relation options: None enable_v1_registry = True boolean value DEPRECATED FOR REMOVAL Deprecated since: Newton *Reason:*The Images (Glance) version 1 API has been DEPRECATED in the Newton release and will be removed on or after Pike release, following the standard OpenStack deprecation policy. Hence, the configuration options specific to the Images (Glance) v1 API are hereby deprecated and subject to removal. Operators are advised to deploy the Images (Glance) v2 API. enable_v2_api = True boolean value Deploy the v2 OpenStack Images API. When this option is set to True , Glance service will respond to requests on registered endpoints conforming to the v2 OpenStack Images API. NOTES: If this option is disabled, then the enable_v2_registry option, which is enabled by default, is also recommended to be disabled. Possible values: True False Related options: enable_v2_registry Deprecated since: Newton *Reason:*The Images (Glance) version 1 API has been DEPRECATED in the Newton release. It will be removed on or after Pike release, following the standard OpenStack deprecation policy. Once we remove the Images (Glance) v1 API, only the Images (Glance) v2 API can be deployed and will be enabled by default making this option redundant. enable_v2_registry = True boolean value Deploy the v2 API Registry service. When this option is set to True , the Registry service will be enabled in Glance for v2 API requests. NOTES: Use of Registry is optional in v2 API, so this option must only be enabled if both enable_v2_api is set to True and the data_api option is set to glance.db.registry.api . If deploying only the v1 OpenStack Images API, this option, which is enabled by default, should be disabled. Possible values: True False Related options: enable_v2_api data_api Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html enabled_import_methods = ['glance-direct', 'web-download', 'copy-image'] list value List of enabled Image Import Methods fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. 
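As a sketch of how the logging options above might be tuned in the same file, the snippet below overrides the package log levels and restates the digest algorithm. The logger names and levels chosen are illustrative assumptions; note that setting default_log_levels replaces the built-in default list, so only the loggers listed receive explicit levels.

    [DEFAULT]
    # Replaces the default list; loggers not listed fall back to the root level.
    default_log_levels = amqp=WARN,sqlalchemy=WARN,keystonemiddleware=WARN,urllib3.connectionpool=DEBUG
    # Algorithm used to sign the UUID during certificate and key file validation.
    digest_algorithm = sha256
    # Treat use of deprecated options as non-fatal (the default).
    fatal_deprecations = False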
hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value . The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None image_cache_dir = None string value Base directory for image cache. This is the location where image data is cached and served out of. All cached images are stored directly under this directory. This directory also contains three subdirectories, namely, incomplete , invalid and queue . The incomplete subdirectory is the staging area for downloading images. An image is first downloaded to this directory. When the image download is successful it is moved to the base directory. However, if the download fails, the partially downloaded image file is moved to the invalid subdirectory. The queue`subdirectory is used for queuing images for download. This is used primarily by the cache-prefetcher, which can be scheduled as a periodic task like cache-pruner and cache-cleaner, to cache images ahead of their usage. Upon receiving the request to cache an image, Glance touches a file in the `queue directory with the image id as the file name. The cache-prefetcher, when running, polls for the files in queue directory and starts downloading them in the order they were created. When the download is successful, the zero-sized file is deleted from the queue directory. If the download fails, the zero-sized file remains and it'll be retried the time cache-prefetcher runs. Possible values: A valid path Related options: image_cache_sqlite_db image_cache_driver = sqlite string value The driver to use for image cache management. This configuration option provides the flexibility to choose between the different image-cache drivers available. An image-cache driver is responsible for providing the essential functions of image-cache like write images to/read images from cache, track age and usage of cached images, provide a list of cached images, fetch size of the cache, queue images for caching and clean up the cache, etc. The essential functions of a driver are defined in the base class glance.image_cache.drivers.base.Driver . All image-cache drivers (existing and prospective) must implement this interface. Currently available drivers are sqlite and xattr . 
These drivers primarily differ in the way they store the information about cached images: The sqlite driver uses a sqlite database (which sits on every glance node locally) to track the usage of cached images. The xattr driver uses the extended attributes of files to store this information. It also requires a filesystem that sets atime on the files when accessed. Possible values: sqlite xattr Related options: None image_cache_max_size = 10737418240 integer value The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. Note This is just a threshold for cache-pruner to act upon. It is NOT a hard limit beyond which the image cache would never grow. In fact, depending on how often the cache-pruner runs and how quickly the cache fills, the image cache can far exceed the size specified here very easily. Hence, care must be taken to appropriately schedule the cache-pruner and in setting this limit. Glance caches an image when it is downloaded. Consequently, the size of the image cache grows over time as the number of downloads increases. To keep the cache size from becoming unmanageable, it is recommended to run the cache-pruner as a periodic task. When the cache pruner is kicked off, it compares the current size of image cache and triggers a cleanup if the image cache grew beyond the size specified here. After the cleanup, the size of cache is less than or equal to size specified here. Possible values: Any non-negative integer Related options: None image_cache_sqlite_db = cache.db string value The relative path to sqlite file database that will be used for image cache management. This is a relative path to the sqlite file database that tracks the age and usage statistics of image cache. The path is relative to image cache base directory, specified by the configuration option image_cache_dir . This is a lightweight database with just one table. Possible values: A valid relative path to sqlite file database Related options: image_cache_dir image_cache_stall_time = 86400 integer value The amount of time, in seconds, an incomplete image remains in the cache. Incomplete images are images for which download is in progress. Please see the description of configuration option image_cache_dir for more detail. Sometimes, due to various reasons, it is possible the download may hang and the incompletely downloaded image remains in the incomplete directory. This configuration option sets a time limit on how long the incomplete images should remain in the incomplete directory before they are cleaned up. Once an incomplete image spends more time than is specified here, it'll be removed by cache-cleaner on its run. It is recommended to run cache-cleaner as a periodic task on the Glance API nodes to keep the incomplete images from occupying disk space. Possible values: Any non-negative integer Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. 
Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. 
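The image cache options above are typically set together. A minimal sketch for the cache portion of glance-cache.conf follows; the cache directory path is an assumed example, and the numeric values simply restate the defaults described above.

    [DEFAULT]
    # Base directory for cached images; the incomplete, invalid and queue
    # subdirectories are created under this path.
    image_cache_dir = /var/lib/glance/image-cache
    # Track cache usage with the sqlite driver (xattr is the alternative).
    image_cache_driver = sqlite
    # Tracking database, relative to image_cache_dir.
    image_cache_sqlite_db = cache.db
    # Size threshold, in bytes, at which the cache-pruner cleans up (10 GiB).
    image_cache_max_size = 10737418240
    # Incomplete downloads older than one day are removed by the cache-cleaner.
    image_cache_stall_time = 86400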
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides location where the temporary data will be stored This option is for Glance internal use only. Glance will save the image data uploaded by the user to staging endpoint during the image import process. This option does not change the staging API endpoint by any means. Note It is discouraged to use same path as [task]/work_dir Note file://<absolute-directory-path> is the only option api_image_import flow will support for now. Note The staging path must be on shared filesystem available to all Glance API nodes. Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. 
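Pulling the rotation and rate-limiting options together, an illustrative configuration might look like the following. The sizes, counts, and intervals are example values rather than defaults or recommendations, and the size rotation type is inferred from the max_logfile_size_mb description above.

    [DEFAULT]
    # Rotate log files by size rather than by time interval.
    log_rotation_type = size
    # Each log file may grow to 200 MB; keep at most 30 rotated files.
    max_logfile_size_mb = 200
    max_logfile_count = 30
    # Allow at most 100 messages per 30-second interval;
    # CRITICAL messages are never filtered.
    rate_limit_interval = 30
    rate_limit_burst = 100
    rate_limit_except_level = CRITICAL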
rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. registry_client_ca_file = None string value Absolute path to the Certificate Authority file. Provide a string value representing a valid absolute path to the certificate authority file to use for establishing a secure connection to the registry server. Note This option must be set if registry_client_protocol is set to https . Alternatively, the GLANCE_CLIENT_CA_FILE environment variable may be set to a filepath of the CA file. This option is ignored if the registry_client_insecure option is set to True . Possible values: String value representing a valid absolute path to the CA file. Related options: registry_client_protocol registry_client_insecure Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_client_cert_file = None string value Absolute path to the certificate file. Provide a string value representing a valid absolute path to the certificate file to use for establishing a secure connection to the registry server. Note This option must be set if registry_client_protocol is set to https . Alternatively, the GLANCE_CLIENT_CERT_FILE environment variable may be set to a filepath of the certificate file. Possible values: String value representing a valid absolute path to the certificate file. Related options: registry_client_protocol Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_client_insecure = False boolean value Set verification of the registry server certificate. Provide a boolean value to determine whether or not to validate SSL connections to the registry server. By default, this option is set to False and the SSL connections are validated. If set to True , the connection to the registry server is not validated via a certifying authority and the registry_client_ca_file option is ignored. This is the registry's equivalent of specifying --insecure on the command line using glanceclient for the API. Possible values: True False Related options: registry_client_protocol registry_client_ca_file Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_client_key_file = None string value Absolute path to the private key file. Provide a string value representing a valid absolute path to the private key file to use for establishing a secure connection to the registry server. Note This option must be set if registry_client_protocol is set to https . Alternatively, the GLANCE_CLIENT_KEY_FILE environment variable may be set to a filepath of the key file. Possible values: String value representing a valid absolute path to the key file. Related options: registry_client_protocol Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_client_protocol = http string value Protocol to use for communication with the registry server. 
Provide a string value representing the protocol to use for communication with the registry server. By default, this option is set to http and the connection is not secure. This option can be set to https to establish a secure connection to the registry server. In this case, provide a key to use for the SSL connection using the registry_client_key_file option. Also include the CA file and cert file using the options registry_client_ca_file and registry_client_cert_file respectively. Possible values: http https Related options: registry_client_key_file registry_client_cert_file registry_client_ca_file Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_client_timeout = 600 integer value Timeout value for registry requests. Provide an integer value representing the period of time in seconds that the API server will wait for a registry request to complete. The default value is 600 seconds. A value of 0 implies that a request will never timeout. Possible values: Zero Positive integer Related options: None Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_host = 0.0.0.0 host address value Address the registry server is hosted on. Possible values: A valid IP or hostname Related options: None Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html registry_port = 9191 port value Port the registry server is listening on. Possible values: A valid port number Related options: None Deprecated since: Queens Reason: Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url . When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. 
When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy . The image locations are shown under the image property locations . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . Possible values: True False Related options: show_image_direct_url location_strategy Deprecated since: Newton *Reason:*Use of this option, deprecated since Newton, is a security risk and will be removed once we figure out a way to satisfy those use cases that currently require it. An earlier announcement that the same functionality can be achieved with greater granularity by using policies is incorrect. You cannot work around this option via policy configuration at the present time, though that is the direction we believe the fix will take. Please keep an eye on the Glance release notes to stay up to date on progress in addressing this issue. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. use_user_token = True boolean value Whether to pass through the user token when making requests to the registry. To prevent failures with token expiration during big files upload, it is recommended to set this parameter to False.If "use_user_token" is not in effect, then admin credentials can be specified. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: None watch-log-file = False boolean value Uses logging handler designed to watch file system. 
When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 3.3.2. glance_store The following table outlines the options available under the [glance_store] group in the /etc/glance/glance-cache.conf file. Table 3.29. glance_store Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev2::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_catalog_info cinder_enforce_multipath = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. Possible values: True or False Related options: cinder_use_multipath cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. 
Possible values: A positive integer Related options: None cinder_mount_point_base = /var/lib/glance/mnt string value Directory where the NFS volume is mounted on the glance node. Possible values: A string representing absolute path of mount point. cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. 
Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_use_multipath = False boolean value Flag to identify multipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: True or False Related options: cinder_enforce_multipath cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None default_store = file string value The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. Note The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: file filesystem http https swift swift+http swift+https swift+config rbd sheepdog cinder vsphere Related Options: stores Deprecated since: Rocky Reason: This option is deprecated against new config option ``default_backend`` which acts similar to ``default_store`` config option. This option is scheduled for removal in the U development cycle. default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. 
The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon, where a higher value indicates higher priority. When two directories have the same priority, the directory with the most free space is used. When no priority is specified, it defaults to zero. More information on configuring the filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when the filesystem store is used as a storage backend. Either the filesystem_store_datadir or the filesystem_store_datadirs option must be specified in glance-api.conf. If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access could be made members of the group that owns the created files. Assigning a value less than or equal to zero for this configuration option signifies that no changes are made to the default permissions. This value will be decoded as an octal digit. For more information, please refer to the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. The file must contain a valid JSON object. The object should contain the keys id and mountpoint. The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None filesystem_thin_provisioning = False boolean value Enable or disable thin provisioning in this backend. This configuration option enables the feature of not actually writing null byte sequences to the filesystem; the resulting holes are automatically interpreted by the filesystem as null bytes and do not consume storage. Enabling this feature also speeds up image upload and saves network traffic, in addition to saving space in the backend, as null byte sequences are not sent over the network. Possible Values: True False Related options: None http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080.
Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. 
The default pool that is used is images. More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rbd_thin_provisioning = False boolean value Enable or disable thin provisioning in this backend. This configuration option enables the feature of not actually writing null byte sequences to the RBD backend; the resulting holes are automatically interpreted by Ceph as null bytes and do not consume storage. Enabling this feature also speeds up image upload and saves network traffic, in addition to saving space in the backend, as null byte sequences are not sent over the network. Possible Values: True False Related options: None rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the commands required by the cinder store and the os-brick library. Possible values: Path to the rootwrap config file Related options: None sheepdog_store_address = 127.0.0.1 host address value Address to bind the Sheepdog daemon to. Provide a string value representing the address to bind the Sheepdog daemon to. The default address set for the sheep is 127.0.0.1. The Sheepdog daemon, also called sheep, manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages directed to the address set using the sheepdog_store_address option to store chunks of Glance images. Possible values: A valid IPv4 address A valid IPv6 address A valid hostname Related Options: sheepdog_store_port Deprecated since: Train Reason: The Sheepdog project is no longer actively maintained. The Sheepdog driver is scheduled for removal in the U development cycle. sheepdog_store_chunk_size = 64 integer value Chunk size for images to be stored in Sheepdog data store. Provide an integer value representing the size in mebibytes (1048576 bytes) to chunk Glance images into. The default chunk size is 64 mebibytes. When using the Sheepdog distributed storage system, the images are chunked into objects of this size and then stored across the distributed data store to use for Glance. Chunk sizes, if a power of two, help avoid fragmentation and enable improved performance. Possible values: Positive integer value representing size in mebibytes. Related Options: None Deprecated since: Train Reason: The Sheepdog project is no longer actively maintained. The Sheepdog driver is scheduled for removal in the U development cycle. sheepdog_store_port = 7000 port value Port number on which the sheep daemon will listen. Provide an integer value representing a valid port number on which you want the Sheepdog daemon to listen. The default port is 7000.
The Sheepdog daemon, also called sheep , manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages it receives on the port number set using sheepdog_store_port option to store chunks of Glance images. Possible values: A valid port number (0 to 65535) Related Options: sheepdog_store_address Deprecated since: Train Reason: The Sheepdog project is no longer actively maintained. The Sheepdog driver is scheduled for removal in the U development cycle. stores = ['file', 'http'] list value List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http . Possible values: A comma separated list that could include: file http swift rbd sheepdog cinder vmware Related Options: default_store Deprecated since: Rocky Reason: This option is deprecated against new config option ``enabled_backends`` which helps to configure multiple backend stores of different schemes. This option is scheduled for removal in the U development cycle. swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. 
By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. 
Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. 
When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. 
Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True . Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to Swift. NOTES: * This is required only when the configuration option swift_buffer_on_upload is set to True. * This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMware store backend. This option may be specified multiple times to specify multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with the highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected.
Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: * None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.3.3. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-cache.conf file. Table 3.30. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. 
policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to CA cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path to client key file for REST based policy check remote_ssl_verify_server_crt = False boolean value Server identity verification for REST based policy check
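To make the relationships between these options concrete, the following is a minimal, hypothetical excerpt of the /etc/glance/glance-cache.conf file that enables the RBD backend together with the [oslo_policy] settings from Table 3.30. The [glance_store] group name and the specific values chosen here are illustrative assumptions; only the option names and their defaults are taken from the reference above.

[glance_store]
# Assumed group for the store options described above; values are examples only.
stores = file,http,rbd
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8
rados_connect_timeout = 0
rbd_thin_provisioning = False

[oslo_policy]
enforce_scope = False
policy_file = policy.json
policy_default_rule = default

Note that stores is deprecated in favour of enabled_backends, as described above; the Swift options (for example swift_store_container and swift_store_create_container_on_put) would be set in the same way if the Swift backend were used instead.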
"The values must be specified as: <group_name>.<event_name> For example: image.create,task.success,metadef_tag",
"'glance-direct', 'copy-image' and 'web-download' are enabled by default.",
"Related options: ** [DEFAULT]/node_staging_uri",
"'glance-direct', 'copy-image' and 'web-download' are enabled by default.",
"Related options: ** [DEFAULT]/node_staging_uri",
"'glance-direct', 'copy-image' and 'web-download' are enabled by default.",
"Related options: ** [DEFAULT]/node_staging_uri"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuration_reference/glance |
Chapter 10. Advanced migration options | Chapter 10. Advanced migration options You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance. 10.1. Terminology Table 10.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 10.2. Migrating applications by using the command line You can migrate applications with the MTC API by using the command line interface (CLI) in order to automate the migration. 10.2.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. 
The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 10.2.2. Creating a registry route for direct image migration For direct image migration, you must create a route to the exposed OpenShift image registry on all remote clusters. Prerequisites The OpenShift image registry must be exposed to external traffic on all remote clusters. The OpenShift Container Platform 4 registry is exposed by default. Procedure To create a route to an OpenShift Container Platform 4 registry, run the following command: USD oc create route passthrough --service=image-registry -n openshift-image-registry 10.2.3. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.12, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 10.2.3.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 10.2.3.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 10.2.3.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 10.2.3.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. 
If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 10.2.3.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 10.2.3.2.1. NetworkPolicy configuration 10.2.3.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 10.2.3.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 10.2.3.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 10.2.3.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. 
For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 10.2.3.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 10.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 10.2.3.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration 10.2.4. Migrating an application by using the MTC API You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API. Procedure Create a MigCluster CR manifest for the host cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF Create a Secret object manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF 1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. 
You can obtain the token by running the following command: USD oc sa get-token migration-controller -n openshift-migration | base64 -w 0 Create a MigCluster CR manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF 1 Specify the Cluster CR of the remote cluster. 2 Optional: For direct image migration, specify the exposed registry route. 3 SSL verification is enabled if false . CA certificates are not required or checked if true . 4 Specify the Secret object of the remote cluster. 5 Specify the URL of the remote cluster. Verify that all clusters are in a Ready state: USD oc describe MigCluster <cluster> Create a Secret object manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF 1 Specify the key ID in base64 format. 2 Specify the secret key in base64 format. AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key: USD echo -n "<key>" | base64 -w 0 1 1 Specify the key ID or the secret key. Both keys must be base64-encoded. Create a MigStorage CR manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF 1 Specify the bucket name. 2 Specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 3 Specify the storage provider. 4 Optional: If you are copying data by using snapshots, specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 5 Optional: If you are copying data by using snapshots, specify the storage provider. Verify that the MigStorage CR is in a Ready state: USD oc describe migstorage <migstorage> Create a MigPlan CR manifest: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF 1 Direct image migration is enabled if false . 2 Direct volume migration is enabled if false . 3 Specify the name of the MigStorage CR instance. 4 Specify one or more source namespaces. By default, the destination namespace has the same name. 
5 Specify a destination namespace if it is different from the source namespace. 6 Specify the name of the source cluster MigCluster instance. Verify that the MigPlan instance is in a Ready state: USD oc describe migplan <migplan> -n openshift-migration Create a MigMigration CR manifest to start the migration defined in the MigPlan instance: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF 1 Specify the MigPlan CR name. 2 The pods on the source cluster are stopped before migration if true . 3 A stage migration, which copies most of the data without stopping the application, is performed if true . 4 A completed migration is rolled back if true . Verify the migration by watching the MigMigration CR progress: USD oc watch migmigration <migmigration> -n openshift-migration The output resembles the following: Example output Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration ... Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47 10.2.5. State migration You can perform repeatable, state-only migrations by using Migration Toolkit for Containers (MTC) to migrate persistent volume claims (PVCs) that constitute an application's state. 
You migrate specified PVCs by excluding other PVCs from the migration plan. You can map the PVCs to ensure that the source and the target PVCs are synchronized. Persistent volume (PV) data is copied to the target cluster. The PV references are not moved, and the application pods continue to run on the source cluster. State migration is specifically designed to be used in conjunction with external CD mechanisms, such as OpenShift Gitops. You can migrate application manifests using GitOps while migrating the state using MTC. If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC. You can perform a state migration between clusters or within the same cluster. Important State migration migrates only the components that constitute an application's state. If you want to migrate an entire namespace, use stage or cutover migration. Prerequisites The state of the application on the source cluster is persisted in PersistentVolumes provisioned through PersistentVolumeClaims . The manifests of the application are available in a central repository that is accessible from both the source and the target clusters. Procedure Migrate persistent volume data from the source to the target cluster. You can perform this step as many times as needed. The source application continues running. Quiesce the source application. You can do this by setting the replicas of workload resources to 0 , either directly on the source cluster or by updating the manifests in GitHub and re-syncing the Argo CD application. Clone application manifests to the target cluster. You can use Argo CD to clone the application manifests to the target cluster. Migrate the remaining volume data from the source to the target cluster. Migrate any new data created by the application during the state migration process by performing a final data migration. If the cloned application is in a quiesced state, unquiesce it. Switch the DNS record to the target cluster to re-direct user traffic to the migrated application. Note MTC 1.6 cannot quiesce applications automatically when performing state migration. It can only migrate PV data. Therefore, you must use your CD mechanisms for quiescing or unquiescing applications. MTC 1.7 introduces explicit Stage and Cutover flows. You can use staging to perform initial data transfers as many times as needed. Then you can perform a cutover, in which the source applications are quiesced automatically. Additional resources See Excluding PVCs from migration to select PVCs for state migration. See Mapping PVCs to migrate source PV data to provisioned PVCs on the destination cluster. See Migrating Kubernetes objects to migrate the Kubernetes objects that constitute an application's state. 10.3. Migration hooks You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration. A migration hook runs on a source or a target cluster at one of the following migration steps: PreBackup : Before resources are backed up on the source cluster. PostBackup : After resources are backed up on the source cluster. PreRestore : Before resources are restored on the target cluster. PostRestore : After resources are restored on the target cluster. 
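As a sketch of where these phases are wired in, a hook entry in the MigPlan CR might look like the following. This is a hypothetical example: the field names (executionNamespace, phase, reference, serviceAccount) and the MigHook reference are assumptions based on typical MTC hook configuration and should be verified against the MigPlan and MigHook CRDs of your MTC version.

apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  hooks:
  - executionNamespace: <namespace>    # assumed field: namespace in which the hook job runs
    phase: PreBackup                   # one of PreBackup, PostBackup, PreRestore, PostRestore
    reference:
      name: <mighook_name>             # assumed reference to a MigHook CR holding the playbook or custom image
      namespace: openshift-migration
    serviceAccount: <service_account>  # assumed field: service account used by the hook job

Because the hook runs as a job using the cluster, service account, and namespace specified in the MigPlan CR, the referenced service account must have sufficient permissions in the namespace where the job runs.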
You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container. Ansible playbook The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan custom resource. The job continues to run until it reaches the default limit of 6 retries or a successful completion. This continues even if the initial pod is evicted or killed. The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.8 . This image is based on the Ansible Runner image and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. Custom hook container You can use a custom hook container instead of the default Ansible image. 10.3.1. Writing an Ansible playbook for a migration hook You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks parameters in the MigPlan custom resource (CR) manifest. The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster. 10.3.1.1. Ansible modules You can use the Ansible shell module to run oc commands. Example shell module - hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces You can use kubernetes.core modules, such as k8s_info , to interact with Kubernetes resources. Example k8s_facts module - hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: "{{ lookup( 'env', 'HOSTNAME') }}" register: pods - name: Print pod name debug: msg: "{{ pods.resources[0].metadata.name }}" You can use the fail module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs and the success or failure status of a hook is based on the exit status of the job container. Example fail module - hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: "fail" fail: msg: "Cause a failure" when: do_fail 10.3.1.2. Environment variables The MigPlan CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup plugin. Example environment variables - hosts: localhost gather_facts: false tasks: - set_fact: namespaces: "{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}" - debug: msg: "{{ item }}" with_items: "{{ namespaces }}" - debug: msg: "{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}" 10.4. Migration plan options You can exclude, edit, and map components in the MigPlan custom resource (CR). 10.4.1. Excluding resources You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan to reduce the resource load for migration or to migrate images or PVs with a different tool. By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. 
These resources are parts of the service catalog API group and the OLM API group, neither of which is supported for migration at this time. Procedure Edit the MigrationController custom resource manifest: USD oc edit migrationcontroller <migration_controller> -n openshift-migration Update the spec section by adding parameters to exclude specific resources. For those resources that do not have their own exclusion parameters, add the additional_excluded_resources parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2 ... 1 Add disable_image_migration: true to exclude image streams from the migration. imagestreams is added to the excluded_resources list in main.yml when the MigrationController pod restarts. 2 Add disable_pv_migration: true to exclude PVs from the migration plan. persistentvolumes and persistentvolumeclaims are added to the excluded_resources list in main.yml when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan. 3 You can add OpenShift Container Platform resources that you want to exclude to the additional_excluded_resources list. Wait two minutes for the MigrationController pod to restart so that the changes are applied. Verify that the resource is excluded: USD oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1 The output contains the excluded resources: Example output name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims 10.4.2. Mapping namespaces If you map namespaces in the MigPlan custom resource (CR), you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges of the namespaces are copied during migration. Two source namespaces mapped to the same destination namespace spec: namespaces: - namespace_2 - namespace_1:namespace_2 If you want the source namespace to be mapped to a namespace of the same name, you do not need to create a mapping. By default, a source namespace and a target namespace have the same name. Incorrect namespace mapping spec: namespaces: - namespace_1:namespace_1 Correct namespace reference spec: namespaces: - namespace_1 10.4.3. Excluding persistent volume claims You select persistent volume claims (PVCs) for state migration by excluding the PVCs that you do not want to migrate. You exclude PVCs by setting the spec.persistentVolumes.pvc.selection.action parameter of the MigPlan custom resource (CR) after the persistent volumes (PVs) have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set it to skip : apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: ... selection: action: skip 10.4.4. 
Mapping persistent volume claims You can migrate persistent volume (PV) data from the source cluster to persistent volume claims (PVCs) that are already provisioned in the destination cluster in the MigPlan CR by mapping the PVCs. This mapping ensures that the destination PVCs of migrated applications are synchronized with the source PVCs. You map PVCs by updating the spec.persistentVolumes.pvc.name parameter in the MigPlan custom resource (CR) after the PVs have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1 1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration. 10.4.5. Editing persistent volume attributes After you create a MigPlan custom resource (CR), the MigrationController CR discovers the persistent volumes (PVs). The spec.persistentVolumes block and the status.destStorageClasses block are added to the MigPlan CR. You can edit the values in the spec.persistentVolumes.selection block. If you change values outside the spec.persistentVolumes.selection block, the values are overwritten when the MigPlan CR is reconciled by the MigrationController CR. Note The default value for the spec.persistentVolumes.selection.storageClass parameter is determined by the following logic: If the source cluster PV is Gluster or NFS, the default is either cephfs , for accessMode: ReadWriteMany , or cephrbd , for accessMode: ReadWriteOnce . If the PV is neither Gluster nor NFS or if cephfs or cephrbd are not available, the default is a storage class for the same provisioner. If a storage class for the same provisioner is not available, the default is the default storage class of the destination cluster. You can change the storageClass value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If the storageClass value is empty, the PV will have no storage class after migration. This option is appropriate if, for example, you want to move the PV to an NFS volume on the destination cluster. Prerequisites MigPlan CR is in a Ready state. Procedure Edit the spec.persistentVolumes.selection values in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs 1 Allowed values are move , copy , and skip . If only one action is supported, the default value is the supported action. If multiple actions are supported, the default value is copy . 2 Allowed values are snapshot and filesystem . Default value is filesystem . 3 The verify parameter is displayed if you select the verification option for file system copy in the MTC web console. You can set it to false . 4 You can change the default value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. 
If no value is specified, the PV will have no storage class after migration. 5 Allowed values are ReadWriteOnce and ReadWriteMany . If this value is not specified, the default is the access mode of the source cluster PVC. You can only edit the access mode in the MigPlan CR. You cannot edit it by using the MTC web console. 10.4.6. Converting storage classes in the MTC web console You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. To do so, you must create and run a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on the cluster on which MTC is running. You must add the cluster to the MTC web console. Procedure In the left-side navigation pane of the OpenShift Container Platform web console, click Projects . In the list of projects, click your project. The Project details page opens. Click the DeploymentConfig name. Note the name of its running pod. Open the YAML tab of the project. Find the PVs and note the names of their corresponding persistent volume claims (PVCs). In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must contain 3 to 63 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). From the Migration type menu, select Storage class conversion . From the Source cluster list, select the desired cluster for storage class conversion. Click . The Namespaces page opens. Select the required project. Click . The Persistent volumes page opens. The page displays the PVs in the project, all selected by default. For each PV, select the desired target storage class. Click . The wizard validates the new migration plan and shows that it is ready. Click Close . The new plan appears on the Migration plans page. To start the conversion, click the options menu of the new plan. Under Migrations , two options are displayed, Stage and Cutover . Note Cutover migration updates PVC references in the applications. Stage migration does not update PVC references in the applications. Select the desired option. Depending on which option you selected, the Stage migration or Cutover migration notification appears. Click Migrate . Depending on which option you selected, the Stage started or Cutover started message appears. To see the status of the current migration, click the number in the Migrations column. The Migrations page opens. To see more details on the current migration and monitor its progress, select the migration from the Type column. The Migration details page opens. When the migration progresses to the DirectVolume step and the status of the step becomes Running Rsync Pods to migrate Persistent Volume data , you can click View details and see the detailed status of the copies. In the breadcrumb bar, click Stage or Cutover and wait for all steps to complete. Open the PersistentVolumeClaims tab of the OpenShift Container Platform web console. You can see new PVCs with the names of the initial PVCs but ending in new , which are using the target storage class. In the left-side navigation pane, click Pods . See that the pod of your project is running again. Additional resources For details about the move and copy actions, see MTC workflow . For details about the skip action, see Excluding PVCs from migration . For details about the file system and snapshot copy methods, see About data copy methods . 10.4.7. 
Performing a state migration of Kubernetes objects by using the MTC API After you migrate all the PV data, you can use the Migration Toolkit for Containers (MTC) API to perform a one-time state migration of Kubernetes objects that constitute an application. You do this by configuring MigPlan custom resource (CR) fields to provide a list of Kubernetes resources with an additional label selector to further filter those resources, and then performing a migration by creating a MigMigration CR. The MigPlan resource is closed after the migration. Note Selecting Kubernetes resources is an API-only feature. You must update the MigPlan CR and create a MigMigration CR for it by using the CLI. The MTC web console does not support migrating Kubernetes objects. Note After migration, the closed parameter of the MigPlan CR is set to true . You cannot create another MigMigration CR for this MigPlan CR. You add Kubernetes objects to the MigPlan CR by using one of the following options: Adding the Kubernetes objects to the includedResources section. When the includedResources field is specified in the MigPlan CR, the plan takes a list of group-kind as input. Only resources present in the list are included in the migration. Adding the optional labelSelector parameter to filter the includedResources in the MigPlan . When this field is specified, only resources matching the label selector are included in the migration. For example, you can filter a list of Secret and ConfigMap resources by using the label app: frontend as a filter. Procedure Update the MigPlan CR to include Kubernetes resources and, optionally, to filter the included resources by adding the labelSelector parameter: To update the MigPlan CR to include Kubernetes resources: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" 1 Specify the Kubernetes object, for example, Secret or ConfigMap . Optional: To filter the included resources by adding the labelSelector parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" ... labelSelector: matchLabels: <label> 2 1 Specify the Kubernetes object, for example, Secret or ConfigMap . 2 Specify the label of the resources to migrate, for example, app: frontend . Create a MigMigration CR to migrate the selected Kubernetes resources. Verify that the correct MigPlan is referenced in migPlanRef : apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false 10.5. Migration controller options You can edit migration plan limits, enable persistent volume resizing, or enable cached Kubernetes clients in the MigrationController custom resource (CR) for large migrations and improved performance. 10.5.1. Increasing limits for large migrations You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC). Important You must test these changes before you perform a migration in a production environment. Procedure Edit the MigrationController custom resource (CR) manifest: USD oc edit migrationcontroller -n openshift-migration Update the following parameters: ... 
mig_controller_limits_cpu: "1" 1 mig_controller_limits_memory: "10Gi" 2 ... mig_controller_requests_cpu: "100m" 3 mig_controller_requests_memory: "350Mi" 4 ... mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7 ... 1 Specifies the number of CPUs available to the MigrationController CR. 2 Specifies the amount of memory available to the MigrationController CR. 3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3). 4 Specifies the amount of memory available for MigrationController CR requests. 5 Specifies the number of persistent volumes that can be migrated. 6 Specifies the number of pods that can be migrated. 7 Specifies the number of namespaces that can be migrated. Create a migration plan that uses the updated parameters to verify the changes. If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan. 10.5.2. Enabling persistent volume resizing for direct volume migration You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster. When the disk usage of a PV reaches a configured level, the MigrationController custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster. A pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3% . This means that PV resizing occurs when the disk usage of a PV is more than 97% . You can increase this threshold so that PV resizing occurs at a lower disk usage level. PVC capacity is calculated according to the following criteria: If the requested storage capacity ( spec.resources.requests.storage ) of the PVC is not equal to its actual provisioned capacity ( status.capacity.storage ), the greater value is used. If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used. Prerequisites The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands. Procedure Log in to the host cluster. Enable PV resizing by patching the MigrationController CR: USD oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ 1 --type='merge' -n openshift-migration 1 Set the value to false to disable PV resizing. Optional: Update the pv_resizing_threshold parameter to increase the threshold: USD oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ 1 --type='merge' -n openshift-migration 1 The default value is 3 . When the threshold is exceeded, the following status message is displayed in the MigPlan CR status: status: conditions: ... - category: Warn durable: true lastTransitionTime: "2021-06-17T08:57:01Z" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: "False" type: PvCapacityAdjustmentRequired Note For AWS gp2 storage, this message does not appear unless the pv_resizing_threshold is 42% or greater because of the way gp2 calculates volume usage and size. ( BZ#1973148 ) 10.5.3. 
Enabling cached Kubernetes clients You can enable cached Kubernetes clients in the MigrationController custom resource (CR) for improved performance during migration. The greatest performance benefit is seen when migrating between clusters in different regions or with significant network latency. Note Delegated tasks, such as Rsync backup for direct volume migration or Velero backup and restore, do not show improved performance with cached clients. Cached clients require extra memory because the MigrationController CR caches all API resources that are required for interacting with MigCluster CRs. Requests that are normally sent to the API server are directed to the cache instead. The cache watches the API server for updates. You can increase the memory limits and requests of the MigrationController CR if OOMKilled errors occur after you enable cached clients. Procedure Enable cached clients by running the following command: $ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]' Optional: Increase the MigrationController CR memory limits by running the following command: $ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]' Optional: Increase the MigrationController CR memory requests by running the following command: $ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]' | [
"oc create route passthrough --service=image-registry -n openshift-image-registry",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF",
"oc sa get-token migration-controller -n openshift-migration | base64 -w 0",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF",
"oc describe MigCluster <cluster>",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF",
"echo -n \"<key>\" | base64 -w 0 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF",
"oc describe migstorage <migstorage>",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF",
"oc describe migplan <migplan> -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF",
"oc watch migmigration <migmigration> -n openshift-migration",
"Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47",
"- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces",
"- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"",
"- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail",
"- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"",
"oc edit migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2",
"oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1",
"name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims",
"spec: namespaces: - namespace_2 - namespace_1:namespace_2",
"spec: namespaces: - namespace_1:namespace_1",
"spec: namespaces: - namespace_1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false",
"oc edit migrationcontroller -n openshift-migration",
"mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'"
]
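The exclusions described in section 10.4.1 can also be applied without an interactive edit session. The following commands are a minimal sketch, not part of the original procedure: they assume the default MigrationController name migration-controller, and resource1 and resource2 are placeholders for the resources you want to exclude.

oc -n openshift-migration patch migrationcontroller migration-controller --type=merge -p '{"spec":{"disable_image_migration":true,"additional_excluded_resources":["resource1","resource2"]}}'
# After the MigrationController pod restarts, confirm the exclusions with the same check used in the procedure
oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1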
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/migration_toolkit_for_containers/advanced-migration-options-mtc |
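When driving a state migration through the API, as described in section 10.4.7, it can help to confirm that the MigPlan CR has reached the Ready state before creating the MigMigration CR, and then to follow the migration phase from the CLI. This is a hedged sketch that relies on the Ready condition and the status.phase field shown in the example MigMigration output; <migplan> and <migmigration> are placeholders.

# Wait up to five minutes for the migration plan to report the Ready condition
oc -n openshift-migration wait migplan/<migplan> --for=condition=Ready --timeout=300s
# Print the current phase of the migration, for example InitialBackupCreated or Completed
oc -n openshift-migration get migmigration <migmigration> -o jsonpath='{.status.phase}{"\n"}'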
Chapter 1. Overview | Chapter 1. Overview Power management has been one of our focus points for improvements for Red Hat Enterprise Linux 7. Limiting the power used by computer systems is one of the most important aspects of green IT (environmentally friendly computing), a set of considerations that also encompasses the use of recyclable materials, the environmental impact of hardware production, and environmental awareness in the design and deployment of systems. In this document, we provide guidance and information regarding power management of your systems running Red Hat Enterprise Linux 7. 1.1. Importance of Power Management At the core of power management is an understanding of how to effectively optimize energy consumption of each system component. This entails studying the different tasks that your system performs, and configuring each component to ensure that its performance is just right for the job. The main motivator for power management is: reducing overall power consumption to save cost The proper use of power management results in: heat reduction for servers and computing centers reduced secondary costs, including cooling, space, cables, generators, and uninterruptible power supplies (UPS) extended battery life for laptops lower carbon dioxide output meeting government regulations or legal requirements regarding Green IT, for example Energy Star meeting company guidelines for new systems As a rule, lowering the power consumption of a specific component (or of the system as a whole) will lead to lower heat and naturally, performance. As such, you should thoroughly study and test the decrease in performance afforded by any configurations you make, especially for mission-critical systems. By studying the different tasks that your system performs, and configuring each component to ensure that its performance is just sufficient for the job, you can save energy, generate less heat, and optimize battery life for laptops. Many of the principles for analysis and tuning of a system in regard to power consumption are similar to those for performance tuning. To some degree, power management and performance tuning are opposite approaches to system configuration, because systems are usually optimized either towards performance or power. This manual describes the tools that Red Hat provides and the techniques we have developed to help you in this process. Red Hat Enterprise Linux 7 already comes with a lot of new power management features that are enabled by default. They were all selectively chosen to not impact the performance of a typical server or desktop use case. However, for very specific use cases where maximum throughput, lowest latency, or highest CPU performance is absolutely required, a review of those defaults might be necessary. To decide whether you should optimize your machines using the techniques described in this document, ask yourself a few questions: Q: Must I optimize? Q: How much do I need to optimize? Q: Will optimization reduce system performance to an unacceptable level? Q: Will the time and resources spent to optimize the system outweigh the gains achieved? Q: Must I optimize? A: The importance of power optimization depends on whether your company has guidelines that need to be followed or if there are any regulations that you have to fulfill. Q: How much do I need to optimize? 
A: Several of the techniques we present do not require you to go through the whole process of auditing and analyzing your machine in detail; instead, they offer a set of general optimizations that typically improve power usage. These will usually not be as effective as a manually audited and optimized system, but they provide a good compromise. Q: Will optimization reduce system performance to an unacceptable level? A: Most of the techniques described in this document impact the performance of your system noticeably. If you choose to implement power management beyond the defaults already in place in Red Hat Enterprise Linux 7, you should monitor the performance of the system after power optimization and decide if the performance loss is acceptable. Q: Will the time and resources spent to optimize the system outweigh the gains achieved? A: Optimizing a single system manually by following the whole process is typically not worth it, because the time and cost spent doing so is far higher than the benefit you would gain over the lifetime of a single machine. On the other hand, if you roll out, for example, 10,000 desktop systems to your offices, all using the same configuration and setup, then creating one optimized setup and applying it to all 10,000 machines is most likely a good idea. The following sections will explain how optimal hardware performance benefits your system in terms of energy consumption. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/overview |
Chapter 15. Dynamic programming languages, web servers, database servers | Chapter 15. Dynamic programming languages, web servers, database servers 15.1. Dynamic programming languages 15.1.1. Notable changes in Python 15.1.1.1. Python 3 is the default Python implementation in RHEL 8 Red Hat Enterprise Linux 8 is distributed with several versions of Python 3 . Python 3.6 is going to be supported for the whole life cycle of RHEL 8. The respective package might not be installed by default. Python 2.7 is available in the python2 package. However, Python 2 will have a shorter life cycle and its aim is to facilitate a smoother transition to Python 3 for customers. For details, see Python versions . Neither the default python package nor the unversioned /usr/bin/python executable is distributed with RHEL 8. Customers are advised to use python3 or python2 directly. Alternatively, administrators can configure the unversioned python command using the alternatives command. See Configuring the unversioned Python . Additional resources Installing and using Python Packaging Python 3 RPMs 15.1.1.2. Migrating from Python 2 to Python 3 As a developer, you may want to migrate your former code that is written in Python 2 to Python 3. For more information about how to migrate large code bases to Python 3, see The Conservative Python 3 Porting Guide . Note that after this migration, the original Python 2 code becomes interpretable by the Python 3 interpreter and stays interpretable for the Python 2 interpreter as well. 15.1.1.3. Configuring the unversioned Python System administrators can configure the unversioned python command, located at /usr/bin/python , using the alternatives command. Note that the required package, python3 , python38 , python39 , python3.11 , python3.12 , or python2 , must be installed before configuring the unversioned command to the respective version. Important The /usr/bin/python executable is controlled by the alternatives system. Any manual changes may be overwritten upon an update. Additional Python-related commands, such as pip3 , do not have configurable unversioned variants. 15.1.1.3.1. Configuring the unversioned python command directly You can configure the unversioned python command directly to a selected version of Python. Prerequisites Ensure that the required version of Python is installed. Procedure To configure the unversioned python command to Python 3.6, use: To configure the unversioned python command to Python 3.8, use: To configure the unversioned python command to Python 3.9, use: To configure the unversioned python command to Python 3.11, use: To configure the unversioned python command to Python 3.12, use: To configure the unversioned python command to Python 2, use: 15.1.1.3.2. Configuring the unversioned python command to the required Python version interactively You can configure the unversioned python command to the required Python version interactively. Prerequisites Ensure that the required version of Python is installed. Procedure To configure the unversioned python command interactively, use: Select the required version from the provided list. To reset this configuration and remove the unversioned python command, use: 15.1.1.3.3. Additional resources alternatives(8) and unversioned-python(1) man pages on your system 15.1.1.4. 
Handling interpreter directives in Python scripts In Red Hat Enterprise Linux 8, executable Python scripts are expected to use interpreter directives (also known as hashbangs or shebangs) that explicitly specify at a minimum the major Python version. For example: The /usr/lib/rpm/redhat/brp-mangle-shebangs buildroot policy (BRP) script is run automatically when building any RPM package, and attempts to correct interpreter directives in all executable files. The BRP script generates errors when encountering a Python script with an ambiguous interpreter directive, such as: or 15.1.1.4.1. Modifying interpreter directives in Python scripts Modify interpreter directives in the Python scripts that cause the build errors at RPM build time. Prerequisites Some of the interpreter directives in your Python scripts cause a build error. Procedure To modify interpreter directives, complete one of the following tasks: Apply the pathfix.py script from the platform-python-devel package: Note that multiple PATH s can be specified. If a PATH is a directory, pathfix.py recursively scans for any Python scripts matching the pattern ^[a-zA-Z0-9_]+\.pyUSD , not only those with an ambiguous interpreter directive. Add this command to the %prep section or at the end of the %install section. Modify the packaged Python scripts so that they conform to the expected format. For this purpose, pathfix.py can be used outside the RPM build process, too. When running pathfix.py outside an RPM build, replace %{__python3} from the example above with a path for the interpreter directive, such as /usr/bin/python3 . If the packaged Python scripts require a version other than Python 3.6, adjust the preceding commands to include the required version. 15.1.1.4.2. Changing /usr/bin/python3 interpreter directives in your custom packages By default, interpreter directives in the form of /usr/bin/python3 are replaced with interpreter directives pointing to Python from the platform-python package, which is used for system tools with Red Hat Enterprise Linux. You can change the /usr/bin/python3 interpreter directives in your custom packages to point to a specific version of Python that you have installed from the AppStream repository. Procedure To build your package for a specific version of Python, add the python* -rpm-macros subpackage of the respective python package to the BuildRequires section of the spec file. For example, for Python 3.6, include the following line: As a result, the /usr/bin/python3 interpreter directives in your custom package are automatically converted to /usr/bin/python3.6 . Note To prevent the BRP script from checking and modifying interpreter directives, use the following RPM directive: 15.1.1.5. Python binding of the net-snmp package is unavailable The Net-SNMP suite of tools does not provide binding for Python 3 , which is the default Python implementation in RHEL 8. Consequently, python-net-snmp , python2-net-snmp , or python3-net-snmp packages are unavailable in RHEL 8. 15.1.2. Notable changes in PHP Red Hat Enterprise Linux 8 is distributed with PHP 7.2 . 
This version introduces the following major changes over PHP 5.4 , which is available in RHEL 7: PHP uses FastCGI Process Manager (FPM) by default (safe for use with a threaded httpd ) The php_value and php-flag variables should no longer be used in the httpd configuration files; they should be set in pool configuration instead: /etc/php-fpm.d/*.conf PHP script errors and warnings are logged to the /var/log/php-fpm/www-error.log file instead of /var/log/httpd/error.log When changing the PHP max_execution_time configuration variable, the httpd ProxyTimeout setting should be increased to match The user running PHP scripts is now configured in the FPM pool configuration (the /etc/php-fpm.d/www.conf file; the apache user is the default) The php-fpm service needs to be restarted after a configuration change or after a new extension is installed The zip extension has been moved from the php-common package to a separate package, php-pecl-zip The following extensions have been removed: aspell mysql (note that the mysqli and pdo_mysql extensions are still available, provided by php-mysqlnd package) memcache 15.1.3. Notable changes in Perl Perl 5.26 , distributed with RHEL 8, introduces the following changes over the version available in RHEL 7: Unicode 9.0 is now supported. New op-entry , loading-file , and loaded-file SystemTap probes are provided. Copy-on-write mechanism is used when assigning scalars for improved performance. The IO::Socket::IP module for handling IPv4 and IPv6 sockets transparently has been added. The Config::Perl::V module to access perl -V data in a structured way has been added. A new perl-App-cpanminus package has been added, which contains the cpanm utility for getting, extracting, building, and installing modules from the Comprehensive Perl Archive Network (CPAN) repository. The current directory . has been removed from the @INC module search path for security reasons. The do statement now returns a deprecation warning when it fails to load a file because of the behavioral change described above. The do subroutine(LIST) call is no longer supported and results in a syntax error. Hashes are randomized by default now. The order in which keys and values are returned from a hash changes on each perl run. To disable the randomization, set the PERL_PERTURB_KEYS environment variable to 0 . Unescaped literal { characters in regular expression patterns are no longer permissible. Lexical scope support for the USD_ variable has been removed. Using the defined operator on an array or a hash results in a fatal error. Importing functions from the UNIVERSAL module results in a fatal error. The find2perl , s2p , a2p , c2ph , and pstruct tools have been removed. The USD{^ENCODING} facility has been removed. The encoding pragma's default mode is no longer supported. To write source code in other encoding than UTF-8 , use the encoding's Filter option. The perl packaging is now aligned with upstream. The perl package installs also core modules and is suitable for development. On production systems, use the perl-interpreter package, which contains the main /usr/bin/perl interpreter. In releases, the perl package included just a minimal interpreter, whereas the perl-core package included both the interpreter and the core modules. 
The IO::Socket::SSL Perl module no longer loads a certificate authority certificate from the ./certs/my-ca.pem file or the ./ca directory, a server private key from the ./certs/server-key.pem file, a server certificate from the ./certs/server-cert.pem file, a client private key from the ./certs/client-key.pem file, and a client certificate from the ./certs/client-cert.pem file. Specify the paths to the files explicitly instead. 15.1.4. Notable changes in Ruby RHEL 8 provides Ruby 2.5 , which introduces numerous new features and enhancements over Ruby 2.0.0 available in RHEL 7. Notable changes include: Incremental garbage collector has been added. The Refinements syntax has been added. Symbols are now garbage collected. The USDSAFE=2 and USDSAFE=3 safe levels are now obsolete. The Fixnum and Bignum classes have been unified into the Integer class. Performance has been improved by optimizing the Hash class, improved access to instance variables, and the Mutex class being smaller and faster. Certain old APIs have been deprecated. Bundled libraries, such as RubyGems , Rake , RDoc , Psych , Minitest , and test-unit , have been updated. Other libraries, such as mathn , DL , ext/tk , and XMLRPC , which were previously distributed with Ruby , are deprecated or no longer included. The SemVer versioning scheme is now used for Ruby versioning. 15.1.5. Notable changes in SWIG RHEL 8 includes the Simplified Wrapper and Interface Generator (SWIG) version 3.0, which provides numerous new features, enhancements, and bug fixes over the version 2.0 distributed in RHEL 7. Most notably, support for the C++11 standard has been implemented. SWIG now supports also Go 1.6 , PHP 7 , Octave 4.2 , and Python 3.5 . 15.1.6. Node.js new in RHEL Node.js , a software development platform for building fast and scalable network applications in the JavaScript programming language, is provided for the first time in RHEL. It was previously available only as a Software Collection. RHEL 8 provides Node.js 10 . 15.2. Tcl Tool command language (Tcl) is a dynamic programming language. The interpreter for this language, together with the C library, is provided by the tcl package. Using Tcl paired with Tk ( Tcl/Tk ) enables creating cross-platform GUI applications. Tk is provided by the tk package. Note that Tk can refer to any of the following: A programming toolkit for multiple languages A Tk C library bindings available for multiple languages, such as C, Ruby, Perl and Python A wish interpreter that instantiates a Tk console A Tk extension that adds a number of new commands to a particular Tcl interpreter 15.2.1. Notable changes in Tcl/Tk 8.6 RHEL 8 is distributed with Tcl/Tk version 8.6 , which provides multiple notable changes over Tcl/Tk version 8.5 : Object-oriented programming support Stackless evaluation implementation Enhanced exceptions handling Collection of third-party packages built and installed with Tcl Multi-thread operations enabled SQL database-powered scripts support IPv6 networking support Built-in Zlib compression List processing Two new commands, lmap and dict map are available, which allow the expression of transformations over Tcl containers. Stacked channels by script Two new commands, chan push and chan pop are available, which allow to add or remove transformations to or from I/O channels. 
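If you want to confirm which Tcl/Tk build is installed and try one of the new Tcl 8.6 commands listed above, a quick check from the shell is enough. This is an illustrative sketch only; it assumes the tcl and tk packages are installed so that the tclsh interpreter is available.

# Show the installed Tcl and Tk packages
rpm -q tcl tk
# Print the interpreter version and exercise the new lmap command (prints 1 4 9)
echo 'puts [info patchlevel]; puts [lmap x {1 2 3} {expr {$x * $x}}]' | tclsh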
For more detailed information about Tcl/Tk version 8.6 changes and new features, see the following resources: Configuring basic system settings Changes in Tcl/Tk 8.6 If you need to migrate to Tcl/Tk 8.6 , see Migration path for users scripting their tasks with Tcl/Tk . 15.3. Web servers 15.3.1. Notable changes in the Apache HTTP Server The Apache HTTP Server has been updated from version 2.4.6 in RHEL 7 to version 2.4.37 in RHEL 8. This updated version includes several new features, but maintains backwards compatibility with the RHEL 7 version at the level of configuration and Application Binary Interface (ABI) of external modules. New features include: HTTP/2 support is now provided by the mod_http2 package, which is a part of the httpd module. systemd socket activation is supported. See the httpd.socket(8) man page for more details. Multiple new modules have been added: mod_proxy_hcheck - a proxy health-check module mod_proxy_uwsgi - a Web Server Gateway Interface (WSGI) proxy mod_proxy_fdpass - provides support for passing the socket of the client to another process mod_cache_socache - an HTTP cache using, for example, a memcache backend mod_md - an ACME protocol SSL/TLS certificate service The following modules now load by default: mod_request mod_macro mod_watchdog A new subpackage, httpd-filesystem , has been added, which contains the basic directory layout for the Apache HTTP Server including the correct permissions for the directories. Instantiated service support, httpd@.service , has been introduced. See the httpd.service man page for more information. A new httpd-init.service replaces the %post script to create a self-signed mod_ssl key pair. Automated TLS certificate provisioning and renewal using the Automatic Certificate Management Environment (ACME) protocol is now supported with the mod_md package (for use with certificate providers such as Let's Encrypt ). The Apache HTTP Server now supports loading TLS certificates and private keys from hardware security tokens directly from PKCS#11 modules. As a result, a mod_ssl configuration can now use PKCS#11 URLs to identify the TLS private key, and, optionally, the TLS certificate in the SSLCertificateKeyFile and SSLCertificateFile directives. A new ListenFree directive in the /etc/httpd/conf/httpd.conf file is now supported. Similarly to the Listen directive, ListenFree provides information about IP addresses, ports, or IP address-and-port combinations that the server listens to. However, with ListenFree , the IP_FREEBIND socket option is enabled by default. Hence, httpd is allowed to bind to a nonlocal IP address or to an IP address that does not exist yet. This allows httpd to listen on a socket without requiring the underlying network interface or the specified dynamic IP address to be up at the time when httpd is trying to bind to it. Note that the ListenFree directive is currently available only in RHEL 8. For more details on ListenFree , see the following table: Table 15.1. ListenFree directive's syntax, status, and modules - Syntax: ListenFree [IP-address:]portnumber [protocol] ; Status: MPM; Modules: event, worker, prefork, mpm_winnt, mpm_netware, mpmt_os2 Other notable changes include: The following modules have been removed: mod_file_cache mod_nss Use mod_ssl as a replacement. For details about migrating from mod_nss , see the Exporting a private key and certificates from an NSS database to use them in an Apache web server configuration section in the Deploying different types of servers documentation.
mod_perl The default type of the DBM authentication database used by the Apache HTTP Server in RHEL 8 has been changed from SDBM to db5 . The mod_wsgi module for the Apache HTTP Server has been updated to Python 3. WSGI applications are now supported only with Python 3, and must be migrated from Python 2. The multi-processing module (MPM) configured by default with the Apache HTTP Server has changed from a multi-process, forked model (known as prefork ) to a high-performance multi-threaded model, event . Any third-party modules that are not thread-safe need to be replaced or removed. To change the configured MPM, edit the /etc/httpd/conf.modules.d/00-mpm.conf file. See the httpd.service(8) man page for more information. The minimum UID and GID allowed for users by suEXEC are now 1000 and 500, respectively (previously 100 and 100). The /etc/sysconfig/httpd file is no longer a supported interface for setting environment variables for the httpd service. The httpd.service(8) man page has been added for the systemd service. Stopping the httpd service now uses a "graceful stop" by default. The mod_auth_kerb module has been replaced by the mod_auth_gssapi module. For instructions on deploying, see Setting up the Apache HTTP web server . 15.3.2. The nginx web server new in RHEL RHEL 8 introduces nginx 1.14 , a web and proxy server supporting HTTP and other protocols, with a focus on high concurrency, performance, and low memory usage. nginx was previously available only as a Software Collection. The nginx web server now supports loading TLS private keys from hardware security tokens directly from PKCS#11 modules. As a result, an nginx configuration can use PKCS#11 URLs to identify the TLS private key in the ssl_certificate_key directive. 15.3.3. Apache Tomcat removed in RHEL 8.0, reintroduced in RHEL 8.8 The Apache Tomcat server was removed from Red Hat Enterprise Linux 8.0 and reintroduced in RHEL 8.8. Tomcat is the servlet container that is used in the official Reference Implementation for the Java Servlet and JavaServer Pages technologies. The Java Servlet and JavaServer Pages specifications are developed by Sun under the Java Community Process. Tomcat is developed in an open and participatory environment and released under the Apache Software License version 2.0. Users of earlier minor versions than RHEL 8.8 who require a servlet container can use the JBoss Web Server . 15.4. Proxy caching servers 15.4.1. Varnish Cache new in RHEL Varnish Cache , a high-performance HTTP reverse proxy, is provided for the first time in RHEL. It was previously available only as a Software Collection. Varnish Cache stores files or fragments of files in memory that are used to reduce the response time and network bandwidth consumption on future equivalent requests. RHEL 8.0 is distributed with Varnish Cache 6.0 . 15.4.2. Notable changes in Squid RHEL 8.0 is distributed with Squid 4.4 , a high-performance proxy caching server for web clients, supporting FTP, Gopher, and HTTP data objects. This release provides numerous new features, enhancements, and bug fixes over the version 3.5 available in RHEL 7. Notable changes include: Configurable helper queue size Changes to helper concurrency channels Changes to the helper binary Secure Internet Content Adaptation Protocol (ICAP) Improved support for Symmetric Multi Processing (SMP) Improved process management Removed support for SSL Removed Edge Side Includes (ESI) custom parser Multiple configuration changes 15.5. 
Database servers RHEL 8 provides the following database servers: MySQL 8.0 , a multi-user, multi-threaded SQL database server. It consists of the MySQL server daemon, mysqld , and many client programs. MariaDB 10.3 , a multi-user, multi-threaded SQL database server. For all practical purposes, MariaDB is binary-compatible with MySQL . PostgreSQL 10 and PostgreSQL 9.6 , an advanced object-relational database management system (DBMS). Redis 5 , an advanced key-value store. It is often referred to as a data structure server because keys can contain strings, hashes, lists, sets, and sorted sets. Redis is provided for the first time in RHEL. Note that the NoSQL MongoDB database server is not included in RHEL 8.0 because it uses the Server Side Public License (SSPL). Database servers are not installable in parallel The mariadb and mysql modules cannot be installed in parallel in RHEL 8.0 due to conflicting RPM packages. By design, it is impossible to install more than one version (stream) of the same module in parallel. For example, you need to choose only one of the available streams from the postgresql module, either 10 (default) or 9.6 . Parallel installation of components is possible in Red Hat Software Collections for RHEL 6 and RHEL 7. In RHEL 8, different versions of database servers can be used in containers. 15.5.1. Notable changes in MariaDB 10.3 MariaDB 10.3 provides numerous new features over the version 5.5 distributed in RHEL 7, such as: Common table expressions System-versioned tables FOR loops Invisible columns Sequences Instant ADD COLUMN for InnoDB Storage-engine independent column compression Parallel replication Multi-source replication In addition, the new mariadb-connector-c packages provide a common client library for MySQL and MariaDB . This library is usable with any version of the MySQL and MariaDB database servers. As a result, the user is able to connect one build of an application to any of the MySQL and MariaDB servers distributed with RHEL 8. Other notable changes include: MariaDB Galera Cluster , a synchronous multi-source cluster, is now a standard part of MariaDB . InnoDB is used as the default storage engine instead of XtraDB . The mariadb-bench subpackage has been removed. The default allowed level of the plug-in maturity has been changed to one level less than the server maturity. As a result, plug-ins with a lower maturity level that were previously working, will no longer load. See also Using MariaDB on Red Hat Enterprise Linux 8 . 15.5.2. Notable changes in MySQL 8.0 RHEL 8 is distributed with MySQL 8.0 , which provides, for example, the following enhancements: MySQL now incorporates a transactional data dictionary, which stores information about database objects. MySQL now supports roles, which are collections of privileges. The default character set has been changed from latin1 to utf8mb4 . Support for common table expressions, both nonrecursive and recursive, has been added. MySQL now supports window functions, which perform a calculation for each row from a query, using related rows. InnoDB now supports the NOWAIT and SKIP LOCKED options with locking read statements. GIS-related functions have been improved. JSON functionality has been enhanced. The new mariadb-connector-c packages provide a common client library for MySQL and MariaDB . This library is usable with any version of the MySQL and MariaDB database servers. As a result, the user is able to connect one build of an application to any of the MySQL and MariaDB servers distributed with RHEL 8. 
In addition, the MySQL 8.0 server distributed with RHEL 8 is configured to use mysql_native_password as the default authentication plug-in because client tools and libraries in RHEL 8 are incompatible with the caching_sha2_password method, which is used by default in the upstream MySQL 8.0 version. To change the default authentication plug-in to caching_sha2_password , edit the /etc/my.cnf.d/mysql-default-authentication-plugin.cnf file as follows: 15.5.3. Notable changes in PostgreSQL RHEL 8.0 provides two versions of the PostgreSQL database server, distributed in two streams of the postgresql module: PostgreSQL 10 (the default stream) and PostgreSQL 9.6 . RHEL 7 includes PostgreSQL version 9.2. Notable changes in PostgreSQL 9.6 are, for example: Parallel execution of the sequential operations: scan , join , and aggregate Enhancements to synchronous replication Improved full-text search enabling users to search for phrases The postgres_fdw data federation driver now supports remote join , sort , UPDATE , and DELETE operations Substantial performance improvements, especially regarding scalability on multi-CPU-socket servers Major enhancements in PostgreSQL 10 include: Logical replication using the publish and subscribe keywords Stronger password authentication based on the SCRAM-SHA-256 mechanism Declarative table partitioning Improved query parallelism Significant general performance improvements Improved monitoring and control See also Using PostgreSQL on Red Hat Enterprise Linux 8 . | [
"alternatives --set python /usr/bin/python3",
"alternatives --set python /usr/bin/python3.8",
"alternatives --set python /usr/bin/python3.9",
"alternatives --set python /usr/bin/python3.11",
"alternatives --set python /usr/bin/python3.12",
"alternatives --set python /usr/bin/python2",
"alternatives --config python",
"alternatives --auto python",
"#!/usr/bin/python3 #!/usr/bin/python3.6 #!/usr/bin/python3.8 #!/usr/bin/python3.9 #!/usr/bin/python3.11 #!/usr/bin/python3.12 #!/usr/bin/python2",
"#!/usr/bin/python",
"#!/usr/bin/env python",
"pathfix.py -pn -i %{__python3} PATH ...",
"BuildRequires: python36-rpm-macros",
"%undefine __brp_mangle_shebangs",
"[mysqld] default_authentication_plugin=caching_sha2_password"
]
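The last snippet above changes the default authentication plug-in in /etc/my.cnf.d/mysql-default-authentication-plugin.cnf; the change takes effect only after the server is restarted. The verification query below is a sketch and assumes that MySQL runs as the mysqld service and that you can connect as root.

systemctl restart mysqld
mysql -u root -p -e "SELECT @@default_authentication_plugin;"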
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/dynamic-programming-languages-web-servers-database-servers_considerations-in-adopting-RHEL-8 |
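As a worked example of the unversioned python command configuration described in section 15.1.1.3, the commands below install Python 3.6, point /usr/bin/python at it through the alternatives system, and confirm the result. This is a sketch; substitute the package name and interpreter path for the Python version you actually need.

yum install -y python3
alternatives --set python /usr/bin/python3
python --version    # expected to report a Python 3.6.x version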
Chapter 7. Management of monitoring stack using the Ceph Orchestrator | Chapter 7. Management of monitoring stack using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to deploy monitoring and alerting stack. The monitoring stack consists of Prometheus, Prometheus exporters, Prometheus Alertmanager, and Grafana. Users need to either define these services with Cephadm in a YAML configuration file, or they can use the command line interface to deploy them. When multiple services of the same type are deployed, a highly-available setup is deployed. The node exporter is an exception to this rule. Note Red Hat Ceph Storage does not support custom images for deploying monitoring services such as Prometheus, Grafana, Alertmanager, and node-exporter. The following monitoring services can be deployed with Cephadm: Prometheus is the monitoring and alerting toolkit. It collects the data provided by Prometheus exporters and fires preconfigured alerts if predefined thresholds have been reached. The Prometheus manager module provides a Prometheus exporter to pass on Ceph performance counters from the collection point in ceph-mgr . The Prometheus configuration, including scrape targets, such as metrics providing daemons, is set up automatically by Cephadm. Cephadm also deploys a list of default alerts, for example, health error, 10% OSDs down, or pgs inactive. Alertmanager handles alerts sent by the Prometheus server. It deduplicates, groups, and routes the alerts to the correct receiver. By default, the Ceph dashboard is automatically configured as the receiver. The Alertmanager handles alerts sent by the Prometheus server. Alerts can be silenced using the Alertmanager, but silences can also be managed using the Ceph Dashboard. Grafana is a visualization and alerting software. The alerting functionality of Grafana is not used by this monitoring stack. For alerting, the Alertmanager is used. By default, traffic to Grafana is encrypted with TLS. You can either supply your own TLS certificate or use a self-signed one. If no custom certificate has been configured before Grafana has been deployed, then a self-signed certificate is automatically created and configured for Grafana. Custom certificates for Grafana can be configured using the following commands: Syntax Node exporter is an exporter for Prometheus which provides data about the node on which it is installed. It is recommended to install the node exporter on all nodes. This can be done using the monitoring.yml file with the node-exporter service type. 7.1. Deploying the monitoring stack using the Ceph Orchestrator The monitoring stack consists of Prometheus, Prometheus exporters, Prometheus Alertmanager, Grafana, and Ceph Exporter. Ceph Dashboard makes use of these components to store and visualize detailed metrics on cluster usage and performance. You can deploy the monitoring stack using the service specification in YAML file format. All the monitoring services can have the network and port they bind to configured in the yml file. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. Procedure Enable the prometheus module in the Ceph Manager daemon. This exposes the internal Ceph metrics so that Prometheus can read them: Example Important Ensure this command is run before Prometheus is deployed. 
If the command was not run before the deployment, you must redeploy Prometheus to update the configuration: Navigate to the following directory: Syntax Example Note If the directory monitoring does not exist, create it. Create the monitoring.yml file: Example Edit the specification file with content similar to the following example: Example Note Ensure the monitoring stack components alertmanager , prometheus , and grafana are deployed on the same host. The node-exporter and ceph-exporter components should be deployed on all the hosts. Apply the monitoring services: Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Important Prometheus, Grafana, and the Ceph dashboard are all automatically configured to talk to each other, resulting in a fully functional Grafana integration in the Ceph dashboard. 7.2. Removing the monitoring stack using the Ceph Orchestrator You can remove the monitoring stack using the ceph orch rm command. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Log into the Cephadm shell: Example Use the ceph orch rm command to remove the monitoring stack: Syntax Example Check the status of the process: Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Deploying the monitoring stack using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. | [
"ceph config-key set mgr/cephadm/ HOSTNAME /grafana_key -i PRESENT_WORKING_DIRECTORY /key.pem ceph config-key set mgr/cephadm/ HOSTNAME /grafana_crt -i PRESENT_WORKING_DIRECTORY /certificate.pem",
"ceph mgr module enable prometheus",
"ceph orch redeploy prometheus",
"cd /var/lib/ceph/ DAEMON_PATH /",
"cd /var/lib/ceph/monitoring/",
"touch monitoring.yml",
"service_type: prometheus service_name: prometheus placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: node-exporter --- service_type: alertmanager service_name: alertmanager placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: grafana service_name: grafana placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: ceph-exporter",
"ceph orch apply -i monitoring.yml",
"ceph orch ls",
"ceph orch ps --service_name= SERVICE_NAME",
"ceph orch ps --service_name=prometheus",
"cephadm shell",
"ceph orch rm SERVICE_NAME --force",
"ceph orch rm grafana ceph orch rm prometheus ceph orch rm node-exporter ceph orch rm ceph-exporter ceph orch rm alertmanager ceph mgr module disable prometheus",
"ceph orch status",
"ceph orch ls",
"ceph orch ps",
"ceph orch ps"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/operations_guide/management-of-monitoring-stack-using-the-ceph-orchestrator |
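The chapter above pins the monitoring daemons to explicit host names in its placement examples. As a hedged variation not taken from that chapter, Cephadm placement specifications also accept a count or a host label; a minimal Prometheus sketch using a hypothetical monitoring label is:

service_type: prometheus
placement:
  label: monitoring

The label would first be applied to the target hosts, for example with ceph orch host label add HOSTNAME monitoring, and the specification is then applied as before with ceph orch apply -i monitoring.yml.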
2.3. Starting the Piranha Configuration Tool Service | 2.3. Starting the Piranha Configuration Tool Service After you have set the password for the Piranha Configuration Tool , start or restart the piranha-gui service located in /etc/rc.d/init.d/piranha-gui . To do this, type the following command as root: /sbin/service piranha-gui start or /sbin/service piranha-gui restart Issuing this command starts a private session of the Apache HTTP Server by calling the symbolic link /usr/sbin/piranha_gui -> /usr/sbin/httpd . For security reasons, the piranha-gui version of httpd runs as the piranha user in a separate process. The fact that piranha-gui leverages the httpd service means that: The Apache HTTP Server must be installed on the system. Stopping or restarting the Apache HTTP Server by means of the service command stops the piranha-gui service. Warning If the command /sbin/service httpd stop or /sbin/service httpd restart is issued on an LVS router, you must start the piranha-gui service by issuing the following command: /sbin/service piranha-gui start The piranha-gui service is all that is necessary to begin configuring Load Balancer Add-On. However, if you are configuring Load Balancer Add-On remotely, the sshd service is also required. You do not need to start the pulse service until configuration using the Piranha Configuration Tool is complete. See Section 4.8, "Starting the Load Balancer Add-On" for information on starting the pulse service. 2.3.1. Configuring the Piranha Configuration Tool Web Server Port The Piranha Configuration Tool runs on port 3636 by default. To change this port number, change the line Listen 3636 in Section 2 of the piranha-gui Web server configuration file /etc/sysconfig/ha/conf/httpd.conf . To use the Piranha Configuration Tool you need at minimum a text-only Web browser. If you start a Web browser on the primary LVS router, open the location http:// localhost :3636 . You can reach the Piranha Configuration Tool from anywhere by means of a Web browser by replacing localhost with the host name or IP address of the primary LVS router. When your browser connects to the Piranha Configuration Tool , you must log in to access the configuration services. Enter piranha in the Username field and the password set with piranha-passwd in the Password field. Now that the Piranha Configuration Tool is running, you may wish to consider limiting who has access to the tool over the network. The next section reviews ways to accomplish this task. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-lvs-piranha-service-VSA |
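As an illustration of the port change described in section 2.3.1 above, a minimal sketch of the edit follows; the alternative port 4343 is purely hypothetical and the rest of the file is omitted:

# Section 2 of /etc/sysconfig/ha/conf/httpd.conf
Listen 4343

/sbin/service piranha-gui restart

After the restart, the tool would be reachable at http://localhost:4343 on the primary LVS router instead of the default port 3636.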
Chapter 14. Pruning objects to reclaim resources | Chapter 14. Pruning objects to reclaim resources Over time, API objects created in OpenShift Container Platform can accumulate in the cluster's etcd data store through normal user operations, such as when building and deploying applications. Cluster administrators can periodically prune older versions of objects from the cluster that are no longer required. For example, by pruning images you can delete older images and layers that are no longer in use, but are still taking up disk space. 14.1. Basic pruning operations The CLI groups prune operations under a common parent command: USD oc adm prune <object_type> <options> This specifies: The <object_type> to perform the action on, such as groups , builds , deployments , or images . The <options> supported to prune that object type. 14.2. Pruning groups To prune groups records from an external provider, administrators can run the following command: USD oc adm prune groups \ --sync-config=path/to/sync/config [<options>] Table 14.1. oc adm prune groups flags Options Description --confirm Indicate that pruning should occur, instead of performing a dry-run. --blacklist Path to the group blacklist file. --whitelist Path to the group whitelist file. --sync-config Path to the synchronization configuration file. Procedure To see the groups that the prune command deletes, run the following command: USD oc adm prune groups --sync-config=ldap-sync-config.yaml To perform the prune operation, add the --confirm flag: USD oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm 14.3. Pruning deployment resources You can prune resources associated with deployments that are no longer required by the system, due to age and status. The following command prunes replication controllers associated with DeploymentConfig objects: USD oc adm prune deployments [<options>] Note To also prune replica sets associated with Deployment objects, use the --replica-sets flag. This flag is currently a Technology Preview feature. Table 14.2. oc adm prune deployments flags Option Description --confirm Indicate that pruning should occur, instead of performing a dry-run. --keep-complete=<N> Per the DeploymentConfig object, keep the last N replication controllers that have a status of Complete and replica count of zero. The default is 5 . --keep-failed=<N> Per the DeploymentConfig object, keep the last N replication controllers that have a status of Failed and replica count of zero. The default is 1 . --keep-younger-than=<duration> Do not prune any replication controller that is younger than <duration> relative to the current time. Valid units of measurement include nanoseconds ( ns ), microseconds ( us ), milliseconds ( ms ), seconds ( s ), minutes ( m ), and hours ( h ). The default is 60m . --orphans Prune all replication controllers that no longer have a DeploymentConfig object, has status of Complete or Failed , and has a replica count of zero. --replica-sets=true|false If true , replica sets are included in the pruning process. The default is false . Important This flag is a Technology Preview feature. Procedure To see what a pruning operation would delete, run the following command: USD oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m To actually perform the prune operation, add the --confirm flag: USD oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m --confirm 14.4. 
Pruning builds To prune builds that are no longer required by the system due to age and status, administrators can run the following command: USD oc adm prune builds [<options>] Table 14.3. oc adm prune builds flags Option Description --confirm Indicate that pruning should occur, instead of performing a dry-run. --orphans Prune all builds whose build configuration no longer exists, status is complete, failed, error, or canceled. --keep-complete=<N> Per build configuration, keep the last N builds whose status is complete. The default is 5 . --keep-failed=<N> Per build configuration, keep the last N builds whose status is failed, error, or canceled. The default is 1 . --keep-younger-than=<duration> Do not prune any object that is younger than <duration> relative to the current time. The default is 60m . Procedure To see what a pruning operation would delete, run the following command: USD oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m To actually perform the prune operation, add the --confirm flag: USD oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m --confirm Note Developers can enable automatic build pruning by modifying their build configuration. Additional resources Performing advanced builds Pruning builds 14.5. Automatically pruning images Images from the OpenShift image registry that are no longer required by the system due to age, status, or exceed limits are automatically pruned. Cluster administrators can configure the Pruning Custom Resource, or suspend it. Prerequisites Cluster administrator permissions. Install the oc CLI. Procedure Verify that the object named imagepruners.imageregistry.operator.openshift.io/cluster contains the following spec and status fields: spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: "True" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: "Periodic image pruner has been created." - type: Scheduled status: "True" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: "Image pruner job has been scheduled." - type: Failed staus: "False" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: "Most recent image pruning job succeeded." 1 schedule : CronJob formatted schedule. This is an optional field, default is daily at midnight. 2 suspend : If set to true , the CronJob running pruning is suspended. This is an optional field, default is false . The initial value on new clusters is false . 3 keepTagRevisions : The number of revisions per tag to keep. This is an optional field, default is 3 . The initial value is 3 . 4 keepYoungerThanDuration : Retain images younger than this duration. This is an optional field. If a value is not specified, either keepYoungerThan or the default value 60m (60 minutes) is used. 5 keepYoungerThan : Deprecated. The same as keepYoungerThanDuration , but the duration is specified as an integer in nanoseconds. This is an optional field. When keepYoungerThanDuration is set, this field is ignored. 6 resources : Standard pod resource requests and limits. This is an optional field. 7 affinity : Standard pod affinity. This is an optional field. 8 nodeSelector : Standard pod node selector. This is an optional field. 
9 tolerations : Standard pod tolerations. This is an optional field. 10 successfulJobsHistoryLimit : The maximum number of successful jobs to retain. Must be >= 1 to ensure metrics are reported. This is an optional field, default is 3 . The initial value is 3 . 11 failedJobsHistoryLimit : The maximum number of failed jobs to retain. Must be >= 1 to ensure metrics are reported. This is an optional field, default is 3 . The initial value is 3 . 12 observedGeneration : The generation observed by the Operator. 13 conditions : The standard condition objects with the following types: Available : Indicates if the pruning job has been created. Reasons can be Ready or Error. Scheduled : Indicates if the pruning job has been scheduled. Reasons can be Scheduled, Suspended, or Error. Failed : Indicates if the most recent pruning job failed. Important The Image Registry Operator's behavior for managing the pruner is orthogonal to the managementState specified on the Image Registry Operator's ClusterOperator object. If the Image Registry Operator is not in the Managed state, the image pruner can still be configured and managed by the Pruning Custom Resource. However, the managementState of the Image Registry Operator alters the behavior of the deployed image pruner job: Managed : the --prune-registry flag for the image pruner is set to true . Removed : the --prune-registry flag for the image pruner is set to false , meaning it only prunes image metadata in etcd. 14.6. Manually pruning images The pruning custom resource enables automatic image pruning for the images from the OpenShift image registry. However, administrators can manually prune images that are no longer required by the system due to age or status, or because they exceed limits. There are two methods to manually prune images: Running image pruning as a Job or CronJob on the cluster. Running the oc adm prune images command. Prerequisites To prune images, you must first log in to the CLI as a user with an access token. The user must also have the system:image-pruner cluster role or greater (for example, cluster-admin ); an example of granting this role follows these prerequisites. Expose the image registry.
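A hedged example of the role prerequisite above, using a hypothetical user name pruner-admin, is:

oc adm policy add-cluster-role-to-user system:image-pruner pruner-admin

Substitute the user (or service account) that will actually run the prune; the same role binding is shown later in this chapter for the pruner service account.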
Procedure To manually prune images that are no longer required by the system due to age, status, or exceed limits, use one of the following methods: Run image pruning as a Job or CronJob on the cluster by creating a YAML file for the pruner service account, for example: USD oc create -f <filename>.yaml Example output kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: "0 0 * * *" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: "quay.io/openshift/origin-cli:4.1" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner Run the oc adm prune images [<options>] command: USD oc adm prune images [<options>] Pruning images removes data from the integrated registry unless --prune-registry=false is used. Pruning images with the --namespace flag does not remove images, only image streams. Images are non-namespaced resources. Therefore, limiting pruning to a particular namespace makes it impossible to calculate its current usage. By default, the integrated registry caches metadata of blobs to reduce the number of requests to storage, and to increase the request-processing speed. Pruning does not update the integrated registry cache. Images that still contain pruned layers after pruning will be broken because the pruned layers that have metadata in the cache will not be pushed. Therefore, you must redeploy the registry to clear the cache after pruning: USD oc rollout restart deployment/image-registry -n openshift-image-registry If the integrated registry uses a Redis cache, you must clean the database manually. If redeploying the registry after pruning is not an option, then you must permanently disable the cache. oc adm prune images operations require a route for your registry. Registry routes are not created by default. The Prune images CLI configuration options table describes the options you can use with the oc adm prune images <options> command. Table 14.4. Prune images CLI configuration options Option Description --all Include images that were not pushed to the registry, but have been mirrored by pullthrough. This is on by default. To limit the pruning to images that were pushed to the integrated registry, pass --all=false . --certificate-authority The path to a certificate authority file to use when communicating with the OpenShift Container Platform-managed registries. Defaults to the certificate authority data from the current user's configuration file. If provided, a secure connection is initiated. --confirm Indicate that pruning should occur, instead of performing a test-run. This requires a valid route to the integrated container image registry. 
If this command is run outside of the cluster network, the route must be provided using --registry-url . --force-insecure Use caution with this option. Allow an insecure connection to the container registry that is hosted via HTTP or has an invalid HTTPS certificate. --keep-tag-revisions=<N> For each imagestream, keep up to at most N image revisions per tag (default 3 ). --keep-younger-than=<duration> Do not prune any image that is younger than <duration> relative to the current time. Alternately, do not prune any image that is referenced by any other object that is younger than <duration> relative to the current time (default 60m ). --prune-over-size-limit Prune each image that exceeds the smallest limit defined in the same project. This flag cannot be combined with --keep-tag-revisions nor --keep-younger-than . --registry-url The address to use when contacting the registry. The command attempts to use a cluster-internal URL determined from managed images and image streams. In case it fails (the registry cannot be resolved or reached), an alternative route that works needs to be provided using this flag. The registry hostname can be prefixed by https:// or http:// , which enforces particular connection protocol. --prune-registry In conjunction with the conditions stipulated by the other options, this option controls whether the data in the registry corresponding to the OpenShift Container Platform image API object is pruned. By default, image pruning processes both the image API objects and corresponding data in the registry. This option is useful when you are only concerned with removing etcd content, to reduce the number of image objects but are not concerned with cleaning up registry storage, or if you intend to do that separately by hard pruning the registry during an appropriate maintenance window for the registry. 14.6.1. Image prune conditions You can apply conditions to your manually pruned images. To remove any image managed by OpenShift Container Platform, or images with the annotation openshift.io/image.managed : Created at least --keep-younger-than minutes ago and are not currently referenced by any: Pods created less than --keep-younger-than minutes ago Image streams created less than --keep-younger-than minutes ago Running pods Pending pods Replication controllers Deployments Deployment configs Replica sets Build configurations Builds --keep-tag-revisions most recent items in stream.status.tags[].items That are exceeding the smallest limit defined in the same project and are not currently referenced by any: Running pods Pending pods Replication controllers Deployments Deployment configs Replica sets Build configurations Builds There is no support for pruning from external registries. When an image is pruned, all references to the image are removed from all image streams that have a reference to the image in status.tags . Image layers that are no longer referenced by any images are removed. Note The --prune-over-size-limit flag cannot be combined with the --keep-tag-revisions flag nor the --keep-younger-than flags. Doing so returns information that this operation is not allowed. Separating the removal of OpenShift Container Platform image API objects and image data from the registry by using --prune-registry=false , followed by hard pruning the registry, can narrow timing windows and is safer when compared to trying to prune both through one command. However, timing windows are not completely removed. 
For example, you can still create a pod referencing an image as pruning identifies that image for pruning. You should still keep track of an API object created during the pruning operations that might reference images so that you can mitigate any references to deleted content. Re-doing the pruning without the --prune-registry option or with --prune-registry=true does not lead to pruning the associated storage in the image registry for images previously pruned by --prune-registry=false . Any images that were pruned with --prune-registry=false can only be deleted from registry storage by hard pruning the registry. 14.6.2. Running the image prune operation Procedure To see what a pruning operation would delete: Keeping up to three tag revisions, and keeping resources (images, image streams, and pods) younger than 60 minutes: USD oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m Pruning every image that exceeds defined limits: USD oc adm prune images --prune-over-size-limit To perform the prune operation with the options from the previous step: USD oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm USD oc adm prune images --prune-over-size-limit --confirm 14.6.3. Using secure or insecure connections The secure connection is the preferred and recommended approach. It is done over the HTTPS protocol with mandatory certificate verification. The prune command always attempts to use it if possible. If it is not possible, in some cases it can fall back to an insecure connection, which is dangerous. In this case, either certificate verification is skipped or the plain HTTP protocol is used. The fallback to an insecure connection is allowed in the following cases unless --certificate-authority is specified: The prune command is run with the --force-insecure option. The provided registry-url is prefixed with the http:// scheme. The provided registry-url is a link-local address or localhost . The configuration of the current user allows for an insecure connection. This can be caused by the user either logging in using --insecure-skip-tls-verify or choosing the insecure connection when prompted. Important If the registry is secured by a certificate authority different from the one used by OpenShift Container Platform, it must be specified using the --certificate-authority flag. Otherwise, the prune command fails with an error. 14.6.4. Image pruning problems Images not being pruned If your images keep accumulating and the prune command removes just a small portion of what you expect, ensure that you understand the image prune conditions that must apply for an image to be considered a candidate for pruning. Ensure that images you want removed occur at higher positions in each tag history than your chosen tag revisions threshold. For example, consider an old and obsolete image named sha256:abz .
By running the following command in your namespace, where the image is tagged, the image is tagged three times in a single image stream named myapp : USD oc get is -n <namespace> -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}'\ '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image "sha256:<hash>"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\n'\ '{{end}}{{end}}{{end}}{{end}}' Example output myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1 When default options are used, the image is never pruned because it occurs at position 0 in a history of myapp:v2.1-may-2016 tag. For an image to be considered for pruning, the administrator must either: Specify --keep-tag-revisions=0 with the oc adm prune images command. Warning This action removes all the tags from all the namespaces with underlying images, unless they are younger or they are referenced by objects younger than the specified threshold. Delete all the istags where the position is below the revision threshold, which means myapp:v2.1 and myapp:v2.1-may-2016 . Move the image further in the history, either by running new builds pushing to the same istag , or by tagging other image. This is not always desirable for old release tags. Tags having a date or time of a particular image's build in their names should be avoided, unless the image must be preserved for an undefined amount of time. Such tags tend to have just one image in their history, which prevents them from ever being pruned. Using a secure connection against insecure registry If you see a message similar to the following in the output of the oc adm prune images command, then your registry is not secured and the oc adm prune images client attempts to use a secure connection: error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client The recommended solution is to secure the registry. Otherwise, you can force the client to use an insecure connection by appending --force-insecure to the command; however, this is not recommended. Using an insecure connection against a secured registry If you see one of the following errors in the output of the oc adm prune images command, it means that your registry is secured using a certificate signed by a certificate authority other than the one used by oc adm prune images client for connection verification: error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response "\x15\x03\x01\x00\x02\x02" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response "\x15\x03\x01\x00\x02\x02"] By default, the certificate authority data stored in the user's configuration files is used; the same is true for communication with the master API. Use the --certificate-authority option to provide the right certificate authority for the container image registry server. Using the wrong certificate authority The following error means that the certificate authority used to sign the certificate of the secured container image registry is different from the authority used by the client: error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority Make sure to provide the right one with the flag --certificate-authority . 
As a workaround, the --force-insecure flag can be added instead. However, this is not recommended. Additional resources Accessing the registry Exposing the registry See Image Registry Operator in OpenShift Container Platform for information on how to create a registry route. 14.7. Hard pruning the registry The OpenShift Container Registry can accumulate blobs that are not referenced by the OpenShift Container Platform cluster's etcd. The basic pruning images procedure, therefore, is unable to operate on them. These are called orphaned blobs . Orphaned blobs can occur from the following scenarios: Manually deleting an image with oc delete image <sha256:image-id> command, which only removes the image from etcd, but not from the registry's storage. Pushing to the registry initiated by daemon failures, which causes some blobs to get uploaded, but the image manifest (which is uploaded as the very last component) does not. All unique image blobs become orphans. OpenShift Container Platform refusing an image because of quota restrictions. The standard image pruner deleting an image manifest, but is interrupted before it deletes the related blobs. A bug in the registry pruner, which fails to remove the intended blobs, causing the image objects referencing them to be removed and the blobs becoming orphans. Hard pruning the registry, a separate procedure from basic image pruning, allows cluster administrators to remove orphaned blobs. You should hard prune if you are running out of storage space in your OpenShift Container Registry and believe you have orphaned blobs. This should be an infrequent operation and is necessary only when you have evidence that significant numbers of new orphans have been created. Otherwise, you can perform standard image pruning at regular intervals, for example, once a day (depending on the number of images being created). Procedure To hard prune orphaned blobs from the registry: Log in. Log in to the cluster with the CLI as kubeadmin or another privileged user that has access to the openshift-image-registry namespace. Run a basic image prune . Basic image pruning removes additional images that are no longer needed. The hard prune does not remove images on its own. It only removes blobs stored in the registry storage. Therefore, you should run this just before the hard prune. Switch the registry to read-only mode. If the registry is not running in read-only mode, any pushes happening at the same time as the prune will either: fail and cause new orphans, or succeed although the images cannot be pulled (because some of the referenced blobs were deleted). Pushes will not succeed until the registry is switched back to read-write mode. Therefore, the hard prune must be carefully scheduled. To switch the registry to read-only mode: In configs.imageregistry.operator.openshift.io/cluster , set spec.readOnly to true : USD oc patch configs.imageregistry.operator.openshift.io/cluster -p '{"spec":{"readOnly":true}}' --type=merge Add the system:image-pruner role. The service account used to run the registry instances requires additional permissions to list some resources. Get the service account name: USD service_account=USD(oc get -n openshift-image-registry \ -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry) Add the system:image-pruner cluster role to the service account: USD oc adm policy add-cluster-role-to-user \ system:image-pruner -z \ USD{service_account} -n openshift-image-registry Optional: Run the pruner in dry-run mode. 
To see how many blobs would be removed, run the hard pruner in dry-run mode. No changes are actually made. The following example references an image registry pod called image-registry-3-vhndw : USD oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check' Alternatively, to get the exact paths for the prune candidates, increase the logging level: USD oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check' Example output time="2017-06-22T11:50:25.066156047Z" level=info msg="start prune (dry-run mode)" distribution_version="v2.4.1+unknown" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time="2017-06-22T11:50:25.092257421Z" level=info msg="Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:25.092395621Z" level=info msg="Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:25.092492183Z" level=info msg="Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:26.673946639Z" level=info msg="Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:26.674024531Z" level=info msg="Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:26.674675469Z" level=info msg="Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 ... Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data Run the hard prune. Execute the following command inside one running instance of a image-registry pod to run the hard prune. The following example references an image registry pod called image-registry-3-vhndw : USD oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete' Example output Deleted 13374 blobs Freed up 2.835 GiB of disk space Switch the registry back to read-write mode. After the prune is finished, the registry can be switched back to read-write mode. In configs.imageregistry.operator.openshift.io/cluster , set spec.readOnly to false : USD oc patch configs.imageregistry.operator.openshift.io/cluster -p '{"spec":{"readOnly":false}}' --type=merge 14.8. Pruning cron jobs Cron jobs can perform pruning of successful jobs, but might not properly handle failed jobs. Therefore, the cluster administrator should perform regular cleanup of jobs manually. They should also restrict the access to cron jobs to a small group of trusted users and set appropriate quota to prevent the cron job from creating too many jobs and pods. Additional resources Running tasks in pods using jobs Resource quotas across multiple projects Using RBAC to define and apply permissions | [
"oc adm prune <object_type> <options>",
"oc adm prune groups --sync-config=path/to/sync/config [<options>]",
"oc adm prune groups --sync-config=ldap-sync-config.yaml",
"oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm",
"oc adm prune deployments [<options>]",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"oc adm prune builds [<options>]",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: \"Periodic image pruner has been created.\" - type: Scheduled status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: \"Image pruner job has been scheduled.\" - type: Failed staus: \"False\" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: \"Most recent image pruning job succeeded.\"",
"oc create -f <filename>.yaml",
"kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: \"0 0 * * *\" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: \"quay.io/openshift/origin-cli:4.1\" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner",
"oc adm prune images [<options>]",
"oc rollout restart deployment/image-registry -n openshift-image-registry",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m",
"oc adm prune images --prune-over-size-limit",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm",
"oc adm prune images --prune-over-size-limit --confirm",
"oc get is -n <namespace> -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}' '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image \"sha256:<hash>\"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\\n' '{{end}}{{end}}{{end}}{{end}}'",
"myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1",
"error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client",
"error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\"]",
"error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":true}}' --type=merge",
"service_account=USD(oc get -n openshift-image-registry -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry)",
"oc adm policy add-cluster-role-to-user system:image-pruner -z USD{service_account} -n openshift-image-registry",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check'",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check'",
"time=\"2017-06-22T11:50:25.066156047Z\" level=info msg=\"start prune (dry-run mode)\" distribution_version=\"v2.4.1+unknown\" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time=\"2017-06-22T11:50:25.092257421Z\" level=info msg=\"Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092395621Z\" level=info msg=\"Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092492183Z\" level=info msg=\"Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.673946639Z\" level=info msg=\"Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674024531Z\" level=info msg=\"Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674675469Z\" level=info msg=\"Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete'",
"Deleted 13374 blobs Freed up 2.835 GiB of disk space",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":false}}' --type=merge"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/building_applications/pruning-objects |
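Section 14.8 above recommends manual cleanup of jobs left behind by cron jobs but gives no commands; a hedged sketch of that routine, with placeholder names, is:

oc get jobs -n <namespace>
oc delete job <job-name> -n <namespace>

Which jobs are safe to remove depends on your own retention policy; the commands only illustrate the manual cleanup the section describes.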
Chapter 6. Synchronizing Active Directory and Identity Management Users | Chapter 6. Synchronizing Active Directory and Identity Management Users This chapter describes synchronization between Active Directory and Red Hat Enterprise Linux Identity Management. Synchronization is one of the two methods for indirect integration of the two environments. For details on the cross-forest trust, which is the other, recommended method, see Chapter 5, Creating Cross-forest Trusts with Active Directory and Identity Management . If you are unsure which method to choose for your environment, read Section 1.3, "Indirect Integration" . Identity Management uses synchronization to combine the user data stored in an Active Directory domain and the user data stored in the IdM domain. Critical user attributes, including passwords, are copied and synchronized between the services. Entry synchronization is performed through a process similar to replication, which uses hooks to connect to and retrieve directory data from the Windows server. Password synchronization is performed through a Windows service which is installed on the Windows server and then communicates to the Identity Management server. 6.1. Supported Windows Platforms Synchronization is supported with Active Directory forests that use the following forest and domain functional levels: Forest functional level range: Windows Server 2008 - Windows Server 2012 R2 Domain functional level range: Windows Server 2008 - Windows Server 2012 R2 The following operating systems are explicitly supported and tested for synchronization using the mentioned functional levels: Windows Server 2012 R2 Windows Server 2016 PassSync 1.1.5 or later is compatible with all supported Windows Server versions. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/active-directory |
Chapter 88. volume | Chapter 88. volume This chapter describes the commands under the volume command. 88.1. volume backup create Create new volume backup Usage: Table 88.1. Positional Arguments Value Summary <volume> Volume to backup (name or id) Table 88.2. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the backup --description <description> Description of the backup --container <container> Optional backup container name --snapshot <snapshot> Snapshot to backup (name or id) --force Allow to back up an in-use volume --incremental Perform an incremental backup Table 88.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.2. volume backup delete Delete volume backup(s) Usage: Table 88.7. Positional Arguments Value Summary <backup> Backup(s) to delete (name or id) Table 88.8. Optional Arguments Value Summary -h, --help Show this help message and exit --force Allow delete in state other than error or available 88.3. volume backup list List volume backups Usage: Table 88.9. Optional Arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --name <name> Filters results by the backup name --status <status> Filters results by the backup status ( creating , available , deleting , error , restoring or error_restoring ) --volume <volume> Filters results by the volume which they backup (name or ID) --marker <volume-backup> The last backup of the page (name or id) --limit <num-backups> Maximum number of backups to display --all-projects Include all projects (admin only) Table 88.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.4. volume backup restore Restore volume backup Usage: Table 88.14. 
Positional Arguments Value Summary <backup> Backup to restore (name or id) <volume> Volume to restore to (name or id) Table 88.15. Optional Arguments Value Summary -h, --help Show this help message and exit Table 88.16. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.17. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.18. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.19. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.5. volume backup set Set volume backup properties Usage: Table 88.20. Positional Arguments Value Summary <backup> Backup to modify (name or id) Table 88.21. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> New backup name --description <description> New backup description --state <state> New backup state ("available" or "error") (admin only) (This option simply changes the state of the backup in the database with no regard to actual status, exercise caution when using) 88.6. volume backup show Display volume backup details Usage: Table 88.22. Positional Arguments Value Summary <backup> Backup to display (name or id) Table 88.23. Optional Arguments Value Summary -h, --help Show this help message and exit Table 88.24. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.25. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.26. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.27. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.7. volume create Create new volume Usage: Table 88.28. Positional Arguments Value Summary <name> Volume name Table 88.29. 
Optional Arguments Value Summary -h, --help Show this help message and exit --size <size> Volume size in gb (required unless --snapshot or --source is specified) --type <volume-type> Set the type of volume --image <image> Use <image> as source of volume (name or id) --snapshot <snapshot> Use <snapshot> as source of volume (name or id) --source <volume> Volume to clone (name or id) --description <description> Volume description --availability-zone <availability-zone> Create volume in <availability-zone> --consistency-group consistency-group> Consistency group where the new volume belongs to --property <key=value> Set a property to this volume (repeat option to set multiple properties) --hint <key=value> Arbitrary scheduler hint key-value pairs to help boot an instance (repeat option to set multiple hints) --bootable Mark volume as bootable --non-bootable Mark volume as non-bootable (default) --read-only Set volume to read-only access mode --read-write Set volume to read-write access mode (default) Table 88.30. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.31. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.32. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.33. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.8. volume delete Delete volume(s) Usage: Table 88.34. Positional Arguments Value Summary <volume> Volume(s) to delete (name or id) Table 88.35. Optional Arguments Value Summary -h, --help Show this help message and exit --force Attempt forced removal of volume(s), regardless of state (defaults to False) --purge Remove any snapshots along with volume(s) (defaults to false) 88.9. volume host set Set volume host properties Usage: Table 88.36. Positional Arguments Value Summary <host-name> Name of volume host Table 88.37. Optional Arguments Value Summary -h, --help Show this help message and exit --disable Freeze and disable the specified volume host --enable Thaw and enable the specified volume host 88.10. volume list List volumes Usage: Table 88.38. Optional Arguments Value Summary -h, --help Show this help message and exit --project <project> Filter results by project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --user <user> Filter results by user (name or id) (admin only) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --name <name> Filter results by volume name --status <status> Filter results by status --all-projects Include all projects (admin only) --long List additional fields in output --marker <volume> The last volume id of the page --limit <num-volumes> Maximum number of volumes to display Table 88.39. 
Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.40. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.41. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.42. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.11. volume migrate Migrate volume to a new host Usage: Table 88.43. Positional Arguments Value Summary <volume> Volume to migrate (name or id) Table 88.44. Optional Arguments Value Summary -h, --help Show this help message and exit --host <host> Destination host (takes the form: host@backend-name#pool) --force-host-copy Enable generic host-based force-migration, which bypasses driver optimizations --lock-volume If specified, the volume state will be locked and will not allow a migration to be aborted (possibly by another operation) 88.12. volume qos associate Associate a QoS specification to a volume type Usage: Table 88.45. Positional Arguments Value Summary <qos-spec> Qos specification to modify (name or id) <volume-type> Volume type to associate the qos (name or id) Table 88.46. Optional Arguments Value Summary -h, --help Show this help message and exit 88.13. volume qos create Create new QoS specification Usage: Table 88.47. Positional Arguments Value Summary <name> New qos specification name Table 88.48. Optional Arguments Value Summary -h, --help Show this help message and exit --consumer <consumer> Consumer of the qos. valid consumers: back-end, both, front-end (defaults to both ) --property <key=value> Set a qos specification property (repeat option to set multiple properties) Table 88.49. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.50. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.51. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.52. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.14. volume qos delete Delete QoS specification Usage: Table 88.53. Positional Arguments Value Summary <qos-spec> Qos specification(s) to delete (name or id) Table 88.54. Optional Arguments Value Summary -h, --help Show this help message and exit --force Allow to delete in-use qos specification(s) 88.15. volume qos disassociate Disassociate a QoS specification from a volume type Usage: Table 88.55. 
Positional Arguments Value Summary <qos-spec> Qos specification to modify (name or id) Table 88.56. Optional Arguments Value Summary -h, --help Show this help message and exit --volume-type <volume-type> Volume type to disassociate the qos from (name or id) --all Disassociate the qos from every volume type 88.16. volume qos list List QoS specifications Usage: Table 88.57. Optional Arguments Value Summary -h, --help Show this help message and exit Table 88.58. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.59. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.60. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.61. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.17. volume qos set Set QoS specification properties Usage: Table 88.62. Positional Arguments Value Summary <qos-spec> Qos specification to modify (name or id) Table 88.63. Optional Arguments Value Summary -h, --help Show this help message and exit --property <key=value> Property to add or modify for this qos specification (repeat option to set multiple properties) 88.18. volume qos show Display QoS specification details Usage: Table 88.64. Positional Arguments Value Summary <qos-spec> Qos specification to display (name or id) Table 88.65. Optional Arguments Value Summary -h, --help Show this help message and exit Table 88.66. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.67. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.68. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.69. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.19. volume qos unset Unset QoS specification properties Usage: Table 88.70. Positional Arguments Value Summary <qos-spec> Qos specification to modify (name or id) Table 88.71. Optional Arguments Value Summary -h, --help Show this help message and exit --property <key> Property to remove from the qos specification. (repeat option to unset multiple properties) 88.20. volume service list List service command Usage: Table 88.72. 
Optional Arguments Value Summary -h, --help Show this help message and exit --host <host> List services on specified host (name only) --service <service> List only specified service (name only) --long List additional fields in output Table 88.73. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.74. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.75. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.76. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.21. volume service set Set volume service properties Usage: Table 88.77. Positional Arguments Value Summary <host> Name of host <service> Name of service (binary name) Table 88.78. Optional Arguments Value Summary -h, --help Show this help message and exit --enable Enable volume service --disable Disable volume service --disable-reason <reason> Reason for disabling the service (should be used with --disable option) 88.22. volume set Set volume properties Usage: Table 88.79. Positional Arguments Value Summary <volume> Volume to modify (name or id) Table 88.80. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> New volume name --size <size> Extend volume size in gb --description <description> New volume description --no-property Remove all properties from <volume> (specify both --no-property and --property to remove the current properties before setting new properties.) --property <key=value> Set a property on this volume (repeat option to set multiple properties) --image-property <key=value> Set an image property on this volume (repeat option to set multiple image properties) --state <state> New volume state ("available", "error", "creating", "deleting", "in-use", "attaching", "detaching", "error_deleting" or "maintenance") (admin only) (This option simply changes the state of the volume in the database with no regard to actual status, exercise caution when using) --attached Set volume attachment status to "attached" (admin only) (This option simply changes the state of the volume in the database with no regard to actual status, exercise caution when using) --detached Set volume attachment status to "detached" (admin only) (This option simply changes the state of the volume in the database with no regard to actual status, exercise caution when using) --type <volume-type> New volume type (name or id) --retype-policy <retype-policy> Migration policy while re-typing volume ("never" or "on-demand", default is "never" ) (available only when --type option is specified) --bootable Mark volume as bootable --non-bootable Mark volume as non-bootable --read-only Set volume to read-only access mode --read-write Set volume to read-write access mode 88.23. volume show Display volume details Usage: Table 88.81. 
Positional Arguments Value Summary <volume> Volume to display (name or id) Table 88.82. Optional Arguments Value Summary -h, --help Show this help message and exit Table 88.83. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.84. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.85. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.86. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.24. volume snapshot create Create new volume snapshot Usage: Table 88.87. Positional Arguments Value Summary <snapshot-name> Name of the new snapshot Table 88.88. Optional Arguments Value Summary -h, --help Show this help message and exit --volume <volume> Volume to snapshot (name or id) (default is <snapshot-name>) --description <description> Description of the snapshot --force Create a snapshot attached to an instance. default is False --property <key=value> Set a property to this snapshot (repeat option to set multiple properties) --remote-source <key=value> The attribute(s) of the existing remote volume snapshot (admin required) (repeat option to specify multiple attributes) e.g.: --remote-source source-name=test_name --remote-source source-id=test_id Table 88.89. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.90. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.91. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.92. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.25. volume snapshot delete Delete volume snapshot(s) Usage: Table 88.93. Positional Arguments Value Summary <snapshot> Snapshot(s) to delete (name or id) Table 88.94. Optional Arguments Value Summary -h, --help Show this help message and exit --force Attempt forced removal of snapshot(s), regardless of state (defaults to False) 88.26. volume snapshot list List volume snapshots Usage: Table 88.95. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Include all projects (admin only) --project <project> Filter results by project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --long List additional fields in output --marker <volume-snapshot> The last snapshot id of the page --limit <num-snapshots> Maximum number of snapshots to display --name <name> Filters results by a name.
--status <status> Filters results by a status. ( available , error , creating , deleting or error-deleting ) --volume <volume> Filters results by a volume (name or id). Table 88.96. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.97. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.98. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.99. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.27. volume snapshot set Set volume snapshot properties Usage: Table 88.100. Positional Arguments Value Summary <snapshot> Snapshot to modify (name or id) Table 88.101. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> New snapshot name --description <description> New snapshot description --no-property Remove all properties from <snapshot> (specify both --no-property and --property to remove the current properties before setting new properties.) --property <key=value> Property to add/change for this snapshot (repeat option to set multiple properties) --state <state> New snapshot state. ("available", "error", "creating", "deleting", or "error_deleting") (admin only) (This option simply changes the state of the snapshot in the database with no regard to actual status, exercise caution when using) 88.28. volume snapshot show Display volume snapshot details Usage: Table 88.102. Positional Arguments Value Summary <snapshot> Snapshot to display (name or id) Table 88.103. Optional Arguments Value Summary -h, --help Show this help message and exit Table 88.104. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.105. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.106. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.107. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.29. volume snapshot unset Unset volume snapshot properties Usage: Table 88.108. Positional Arguments Value Summary <snapshot> Snapshot to modify (name or id) Table 88.109. Optional Arguments Value Summary -h, --help Show this help message and exit --property <key> Property to remove from snapshot (repeat option to remove multiple properties) 88.30. volume transfer request accept Accept volume transfer request. 
Usage: Table 88.110. Positional Arguments Value Summary <transfer-request-id> Volume transfer request to accept (id only) Table 88.111. Optional Arguments Value Summary -h, --help Show this help message and exit --auth-key <key> Volume transfer request authentication key Table 88.112. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.113. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.114. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.115. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.31. volume transfer request create Create volume transfer request. Usage: Table 88.116. Positional Arguments Value Summary <volume> Volume to transfer (name or id) Table 88.117. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> New transfer request name (default to none) Table 88.118. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.119. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.120. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.121. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.32. volume transfer request delete Delete volume transfer request(s). Usage: Table 88.122. Positional Arguments Value Summary <transfer-request> Volume transfer request(s) to delete (name or id) Table 88.123. Optional Arguments Value Summary -h, --help Show this help message and exit 88.33. volume transfer request list Lists all volume transfer requests. Usage: Table 88.124. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Include all projects (admin only) Table 88.125. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.126. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.127. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.128. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.34. volume transfer request show Show volume transfer request details. Usage: Table 88.129. Positional Arguments Value Summary <transfer-request> Volume transfer request to display (name or id) Table 88.130. Optional Arguments Value Summary -h, --help Show this help message and exit Table 88.131. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.132. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.133. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.134. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.35. volume type create Create new volume type Usage: Table 88.135. Positional Arguments Value Summary <name> Volume type name Table 88.136. Optional Arguments Value Summary -h, --help Show this help message and exit --description <description> Volume type description --public Volume type is accessible to the public --private Volume type is not accessible to the public --property <key=value> Set a property on this volume type (repeat option to set multiple properties) --project <project> Allow <project> to access private type (name or id) (Must be used with --private option) --encryption-provider <provider> Set the encryption provider format for this volume type (e.g "luks" or "plain") (admin only) (This option is required when setting encryption type of a volume. Consider using other encryption options such as: "-- encryption-cipher", "--encryption-key-size" and "-- encryption-control-location") --encryption-cipher <cipher> Set the encryption algorithm or mode for this volume type (e.g "aes-xts-plain64") (admin only) --encryption-key-size <key-size> Set the size of the encryption key of this volume type (e.g "128" or "256") (admin only) --encryption-control-location <control-location> Set the notional service where the encryption is performed ("front-end" or "back-end") (admin only) (The default value for this option is "front-end" when setting encryption type of a volume. Consider using other encryption options such as: "--encryption- cipher", "--encryption-key-size" and "--encryption- provider") --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 88.137. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.138. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.139. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.140. 
Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.36. volume type delete Delete volume type(s) Usage: Table 88.141. Positional Arguments Value Summary <volume-type> Volume type(s) to delete (name or id) Table 88.142. Optional Arguments Value Summary -h, --help Show this help message and exit 88.37. volume type list List volume types Usage: Table 88.143. Optional Arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --default List the default volume type --public List only public types --private List only private types (admin only) --encryption-type Display encryption information for each volume type (admin only) Table 88.144. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 88.145. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.146. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.147. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.38. volume type set Set volume type properties Usage: Table 88.148. Positional Arguments Value Summary <volume-type> Volume type to modify (name or id) Table 88.149. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> Set volume type name --description <description> Set volume type description --property <key=value> Set a property on this volume type (repeat option to set multiple properties) --project <project> Set volume type access to project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --encryption-provider <provider> Set the encryption provider format for this volume type (e.g "luks" or "plain") (admin only) (This option is required when setting encryption type of a volume for the first time. 
Consider using other encryption options such as: "--encryption-cipher", "--encryption-key-size" and "--encryption-control-location") --encryption-cipher <cipher> Set the encryption algorithm or mode for this volume type (e.g "aes-xts-plain64") (admin only) --encryption-key-size <key-size> Set the size of the encryption key of this volume type (e.g "128" or "256") (admin only) --encryption-control-location <control-location> Set the notional service where the encryption is performed ("front-end" or "back-end") (admin only) (The default value for this option is "front-end" when setting encryption type of a volume for the first time. Consider using other encryption options such as: "--encryption-cipher", "--encryption-key-size" and "--encryption-provider") 88.39. volume type show Display volume type details Usage: Table 88.150. Positional Arguments Value Summary <volume-type> Volume type to display (name or id) Table 88.151. Optional Arguments Value Summary -h, --help Show this help message and exit --encryption-type Display encryption information of this volume type (admin only) Table 88.152. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 88.153. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 88.154. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 88.155. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 88.40. volume type unset Unset volume type properties Usage: Table 88.156. Positional Arguments Value Summary <volume-type> Volume type to modify (name or id) Table 88.157. Optional Arguments Value Summary -h, --help Show this help message and exit --property <key> Remove a property from this volume type (repeat option to remove multiple properties) --project <project> Removes volume type access to project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --encryption-type Remove the encryption type for this volume type (admin only) 88.41. volume unset Unset volume properties Usage: Table 88.158. Positional Arguments Value Summary <volume> Volume to modify (name or id) Table 88.159. Optional Arguments Value Summary -h, --help Show this help message and exit --property <key> Remove a property from volume (repeat option to remove multiple properties) --image-property <key> Remove an image property from volume (repeat option to remove multiple image properties) | [
"openstack volume backup create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] [--container <container>] [--snapshot <snapshot>] [--force] [--incremental] <volume>",
"openstack volume backup delete [-h] [--force] <backup> [<backup> ...]",
"openstack volume backup list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--long] [--name <name>] [--status <status>] [--volume <volume>] [--marker <volume-backup>] [--limit <num-backups>] [--all-projects]",
"openstack volume backup restore [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <backup> <volume>",
"openstack volume backup set [-h] [--name <name>] [--description <description>] [--state <state>] <backup>",
"openstack volume backup show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <backup>",
"openstack volume create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--size <size>] [--type <volume-type>] [--image <image> | --snapshot <snapshot> | --source <volume>] [--description <description>] [--availability-zone <availability-zone>] [--consistency-group consistency-group>] [--property <key=value>] [--hint <key=value>] [--bootable | --non-bootable] [--read-only | --read-write] <name>",
"openstack volume delete [-h] [--force | --purge] <volume> [<volume> ...]",
"openstack volume host set [-h] [--disable | --enable] <host-name>",
"openstack volume list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--project <project>] [--project-domain <project-domain>] [--user <user>] [--user-domain <user-domain>] [--name <name>] [--status <status>] [--all-projects] [--long] [--marker <volume>] [--limit <num-volumes>]",
"openstack volume migrate [-h] --host <host> [--force-host-copy] [--lock-volume] <volume>",
"openstack volume qos associate [-h] <qos-spec> <volume-type>",
"openstack volume qos create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--consumer <consumer>] [--property <key=value>] <name>",
"openstack volume qos delete [-h] [--force] <qos-spec> [<qos-spec> ...]",
"openstack volume qos disassociate [-h] [--volume-type <volume-type> | --all] <qos-spec>",
"openstack volume qos list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]",
"openstack volume qos set [-h] [--property <key=value>] <qos-spec>",
"openstack volume qos show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <qos-spec>",
"openstack volume qos unset [-h] [--property <key>] <qos-spec>",
"openstack volume service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--host <host>] [--service <service>] [--long]",
"openstack volume service set [-h] [--enable | --disable] [--disable-reason <reason>] <host> <service>",
"openstack volume set [-h] [--name <name>] [--size <size>] [--description <description>] [--no-property] [--property <key=value>] [--image-property <key=value>] [--state <state>] [--attached | --detached] [--type <volume-type>] [--retype-policy <retype-policy>] [--bootable | --non-bootable] [--read-only | --read-write] <volume>",
"openstack volume show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <volume>",
"openstack volume snapshot create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--volume <volume>] [--description <description>] [--force] [--property <key=value>] [--remote-source <key=value>] <snapshot-name>",
"openstack volume snapshot delete [-h] [--force] <snapshot> [<snapshot> ...]",
"openstack volume snapshot list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all-projects] [--project <project>] [--project-domain <project-domain>] [--long] [--marker <volume-snapshot>] [--limit <num-snapshots>] [--name <name>] [--status <status>] [--volume <volume>]",
"openstack volume snapshot set [-h] [--name <name>] [--description <description>] [--no-property] [--property <key=value>] [--state <state>] <snapshot>",
"openstack volume snapshot show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <snapshot>",
"openstack volume snapshot unset [-h] [--property <key>] <snapshot>",
"openstack volume transfer request accept [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --auth-key <key> <transfer-request-id>",
"openstack volume transfer request create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] <volume>",
"openstack volume transfer request delete [-h] <transfer-request> [<transfer-request> ...]",
"openstack volume transfer request list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all-projects]",
"openstack volume transfer request show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <transfer-request>",
"openstack volume type create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--public | --private] [--property <key=value>] [--project <project>] [--encryption-provider <provider>] [--encryption-cipher <cipher>] [--encryption-key-size <key-size>] [--encryption-control-location <control-location>] [--project-domain <project-domain>] <name>",
"openstack volume type delete [-h] <volume-type> [<volume-type> ...]",
"openstack volume type list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--long] [--default | --public | --private] [--encryption-type]",
"openstack volume type set [-h] [--name <name>] [--description <description>] [--property <key=value>] [--project <project>] [--project-domain <project-domain>] [--encryption-provider <provider>] [--encryption-cipher <cipher>] [--encryption-key-size <key-size>] [--encryption-control-location <control-location>] <volume-type>",
"openstack volume type show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--encryption-type] <volume-type>",
"openstack volume type unset [-h] [--property <key>] [--project <project>] [--project-domain <project-domain>] [--encryption-type] <volume-type>",
"openstack volume unset [-h] [--property <key>] [--image-property <key>] <volume>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/volume |
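The usage strings above can be combined into short workflows. The following sketch assumes illustrative names such as gold, gold-type, db-volume, and db-snap, plus an example QoS property key; none of these values come from the reference itself, so substitute your own names, keys, and the authentication key and request ID returned by the transfer commands.
$ # Create a back-end QoS specification and associate it with a volume type
$ openstack volume qos create --consumer back-end --property total_iops_sec=500 gold
$ openstack volume qos associate gold gold-type
$ # Snapshot a volume, then list snapshots for that volume with additional fields
$ openstack volume snapshot create --volume db-volume --description "pre-upgrade" db-snap
$ openstack volume snapshot list --volume db-volume --long
$ # Hand a volume to another project: create a transfer request, then accept it with the returned auth key
$ openstack volume transfer request create --name db-handover db-volume
$ openstack volume transfer request accept --auth-key <key> <transfer-request-id>
Every flag shown here is taken from the tables above; only the names and the total_iops_sec property key are placeholders chosen for the example.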
Chapter 3. Important notes | Chapter 3. Important notes 3.1. End of Red Hat Enterprise Linux 6 support As of version 2.9, AMQ Clients no longer supports Red Hat Enterprise Linux 6. Red Hat Enterprise Linux 6 ended maintenance support on November 30th, 2020. For more information, see Red Hat Enterprise Linux Life Cycle . 3.2. Long term support AMQ Clients 2.9 has been designated as a long term support (LTS) release version. Bug fixes and security advisories will be made available for AMQ Clients 2.9 in a series of micro releases (2.9.1, 2.9.2, 2.9.3, and so on) for a period of at least 12 months. This means that you will be able to get recent bug fixes and security advisories for AMQ Clients without having to upgrade to a new minor release. Note the following important points about the LTS release stream: The LTS release stream provides only bug fixes. No new enhancements will be added to this stream. To remain in a supported configuration, you must upgrade to the latest micro release in the LTS release stream. The LTS version will be supported for at least 12 months from the time of the AMQ Clients 2.9.0 GA. 3.3. AMQ C++ Unsettled interfaces The AMQ C++ messaging API includes classes and methods that are not yet proven and can change in future releases. Be aware that use of these interfaces might require changes to your application code in the future. These interfaces are marked Unsettled API in the API reference. They include the interfaces in the proton::codec and proton::io namespaces and the following interfaces in the proton namespace. listen_handler The on_sender_drain_start and on_sender_drain_finish methods on messaging_handler The draining and return_credit methods on sender The draining and drain methods on receiver API elements present in header files but not yet documented are considered unsettled and are subject to change. Deprecated interfaces Interfaces marked Deprecated in the API reference are scheduled for removal in a future release. This release deprecates the following interfaces in the proton namespace. void_function0 - Use the work class or C++11 lambdas instead. default_container - Use the container class instead. url and url_error - Use a third-party URL library instead. 3.4. Preferred clients In general, AMQ clients that support the AMQP 1.0 standard are preferred for new application development. However, the following exceptions apply: If your implementation requires distributed transactions, use the AMQ Core Protocol JMS client. If you require MQTT or STOMP in your domain (for IoT applications, for instance), use community-supported MQTT or STOMP clients. The considerations above do not necessarily apply if you are already using: The AMQ OpenWire JMS client (the JMS implementation previously provided in A-MQ 6) The AMQ Core Protocol JMS client (the JMS implementation previously provided with HornetQ) 3.5. Legacy clients Deprecation of the CMS and NMS APIs The ActiveMQ CMS and NMS messaging APIs are deprecated in AMQ 7. It is recommended that users of the CMS API migrate to AMQ C++, and users of the NMS API migrate to AMQ .NET. The CMS and NMS APIs might have reduced functionality in AMQ 7. Deprecation of the legacy AMQ C++ client The legacy AMQ C++ client (the C++ client previously provided in MRG Messaging) is deprecated in AMQ 7. It is recommended that users of this API migrate to AMQ C++. The Core API is unsupported The Artemis Core API client is not supported. This client is distinct from the AMQ Core Protocol JMS client, which is supported. 3.6. 
Upstream versions AMQ C++, AMQ Python, and AMQ Ruby are now based on Qpid Proton 0.33.0 . AMQ JavaScript is now based on Rhea 1.0.24 . AMQ .NET is now based on AMQP.Net Lite 2.4.0 . AMQ JMS is now based on Qpid JMS 0.55.0 . AMQ Core Protocol JMS is now based on ActiveMQ Artemis 2.16.0 . AMQ OpenWire JMS is now based on ActiveMQ 5.11.0 . AMQ JMS Pool is now based on Pooled JMS 1.2.1 . AMQ Resource Adapter is now based on AMQP 1.0 Resource Adapter 1.0.2 . AMQ Spring Boot Starter is now based on AMQP 1.0 JMS Spring Boot 2.3.6 . AMQ Netty OpenSSL is now based on netty-tcnative 2.0.34.Final . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/amq_clients_2.9_release_notes/important_notes |
Part III. Package Lists - Supplementary Channel | Part III. Package Lists - Supplementary Channel This part provides an overview of packages available in the Supplementary channel. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/package_manifest/part-documentation-package_manifest-package_lists-supplementary_channel |
Chapter 15. Installing a cluster on AWS in a restricted network with user-provisioned infrastructure | Chapter 15. Installing a cluster on AWS in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.13, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide and an internal mirror of the installation release content. Important While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the AWS APIs. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 15.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a mirror registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 15.2. About installations in restricted networks In OpenShift Container Platform 4.13, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. 
Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 15.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 15.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 15.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 15.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 15.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 15.4.2. 
Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 15.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 15.4.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 15.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 15.4.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 15.2. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 15.4.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. 
The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 15.5. Required AWS infrastructure components To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure. For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page. By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components: An AWS Virtual Private Cloud (VPC) Networking and load balancing components Security groups and roles An OpenShift Container Platform bootstrap node OpenShift Container Platform control plane nodes An OpenShift Container Platform compute node Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate. 15.5.1. Other infrastructure components A VPC DNS entries Load balancers (classic or network) and listeners A public and a private Route 53 zone Security groups IAM roles S3 buckets If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. 
Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. Required DNS and load balancing components Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer. The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the control plane nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster. Component AWS type Description DNS AWS::Route53::HostedZone The hosted zone for your internal DNS. Public load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your public subnets. External API server record AWS::Route53::RecordSetGroup Alias records for the external API server. External listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the external load balancer. External target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the external load balancer. Private load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your private subnets. Internal API server record AWS::Route53::RecordSetGroup Alias records for the internal API server. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 22623 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. 
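Before moving on to security groups, it can help to confirm the DNS and load balancer wiring just described. The following sketch is illustrative only: it assumes a host that can reach the VPC, uses placeholder values for <cluster_name> and <domain>, and the exact responses vary by cluster version; port 22623 answers only from inside the cluster network.
$ # Both API records must resolve: api to the external load balancer, api-int to the internal one
$ dig +short api.<cluster_name>.<domain>
$ dig +short api-int.<cluster_name>.<domain>
$ # The Kubernetes API listener on port 6443 should answer on both records
$ curl -k https://api.<cluster_name>.<domain>:6443/version
$ # The machine config (Ignition) listener on port 22623 should answer only on the internal record, from within the cluster
$ curl -kI https://api-int.<cluster_name>.<domain>:22623/config/worker
If the records resolve but the listeners do not respond, recheck the listener and target group definitions in the table above before creating the security groups.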
Security groups The control plane and worker machines require access to the following ports: Group Type IP Protocol Port range MasterSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 tcp 6443 tcp 22623 WorkerSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 BootstrapSecurityGroup AWS::EC2::SecurityGroup tcp 22 tcp 19531 Control plane Ingress The control plane machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range MasterIngressEtcd etcd tcp 2379 - 2380 MasterIngressVxlan Vxlan packets udp 4789 MasterIngressWorkerVxlan Vxlan packets udp 4789 MasterIngressInternal Internal cluster communication and Kubernetes proxy metrics tcp 9000 - 9999 MasterIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 MasterIngressKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressWorkerKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressGeneve Geneve packets udp 6081 MasterIngressWorkerGeneve Geneve packets udp 6081 MasterIngressIpsecIke IPsec IKE packets udp 500 MasterIngressWorkerIpsecIke IPsec IKE packets udp 500 MasterIngressIpsecNat IPsec NAT-T packets udp 4500 MasterIngressWorkerIpsecNat IPsec NAT-T packets udp 4500 MasterIngressIpsecEsp IPsec ESP packets 50 All MasterIngressWorkerIpsecEsp IPsec ESP packets 50 All MasterIngressInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressWorkerInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 MasterIngressWorkerIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Worker Ingress The worker machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range WorkerIngressVxlan Vxlan packets udp 4789 WorkerIngressWorkerVxlan Vxlan packets udp 4789 WorkerIngressInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressWorkerKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressGeneve Geneve packets udp 6081 WorkerIngressMasterGeneve Geneve packets udp 6081 WorkerIngressIpsecIke IPsec IKE packets udp 500 WorkerIngressMasterIpsecIke IPsec IKE packets udp 500 WorkerIngressIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressMasterIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressIpsecEsp IPsec ESP packets 50 All WorkerIngressMasterIpsecEsp IPsec ESP packets 50 All WorkerIngressInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressMasterInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 WorkerIngressMasterIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Roles and instance profiles You must grant the machines permissions in AWS. 
The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide a AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions. Role Effect Action Resource Master Allow ec2:* * Allow elasticloadbalancing:* * Allow iam:PassRole * Allow s3:GetObject * Worker Allow ec2:Describe* * Bootstrap Allow ec2:Describe* * Allow ec2:AttachVolume * Allow ec2:DetachVolume * 15.5.2. Cluster machines You need AWS::EC2::Instance objects for the following machines: A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys. Three control plane machines. The control plane machines are not governed by a control plane machine set. Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a compute machine set. 15.5.3. Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 15.3. Required EC2 permissions for installation ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:AttachNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 15.4. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing VPC, your account does not require these permissions for creating network resources. Example 15.5. 
Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:SetLoadBalancerPoliciesOfListener Example 15.6. Required Elastic Load Balancing permissions (ELBv2) for installation elasticloadbalancing:AddTags elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterTargets Example 15.7. Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagRole Note If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 15.8. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 15.9. Required S3 permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketPolicy s3:GetBucketObjectLockConfiguration s3:GetBucketReplication s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketTagging s3:PutEncryptionConfiguration Example 15.10. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 15.11. 
Required permissions to delete base cluster resources
autoscaling:DescribeAutoScalingGroups ec2:DeletePlacementGroup ec2:DeleteNetworkInterface ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources
Example 15.12. Required permissions to delete network resources
ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation
Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources.
Example 15.13. Required permissions to delete a cluster with shared instance roles
iam:UntagRole
Example 15.14. Additional IAM and S3 permissions that are required to create manifests
iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:PutBucketPublicAccessBlock s3:GetBucketPublicAccessBlock s3:PutLifecycleConfiguration s3:ListBucket s3:ListBucketMultipartUploads s3:AbortMultipartUpload
Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions.
Example 15.15. Optional permissions for instance and quota checks for installation
ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas
15.6. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Important Do not skip this procedure in production environments, where disaster recovery and debugging are required.
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs .
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. An optional way to confirm the key type and fingerprint of an existing key is sketched below.
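If you reuse an existing key pair, the following optional check is one way to confirm which key you are about to hand to the installation program. This is a sketch only; the ~/.ssh/id_ed25519.pub path is just an example.
$ ssh-keygen -l -f ~/.ssh/id_ed25519.pub
Example output (format varies by key type)
256 SHA256:<fingerprint> <comment> (ED25519)
The key listed here is the one whose public half is added to ~/.ssh/authorized_keys for the core user on each node.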
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Add your SSH private key to the ssh-agent :
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 .
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program.
15.7. Creating the installation files for AWS
To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.
15.7.1. Optional: Creating a separate /var partition
It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example:
/var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system.
/var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.
/var : Holds data that you might want to keep separate for purposes such as auditing.
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.
Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.
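The effect of this preparation step is only visible after the cluster built from these files is running. The following commands are a sketch of one way to confirm, after installation, that /var ended up on its own partition on a worker node; the node name is a placeholder, and the check assumes you have cluster-admin access with the oc client.
$ oc debug node/<worker_node_name> -- chroot /host lsblk        # the partition labeled var should appear on the boot disk
$ oc debug node/<worker_node_name> -- chroot /host findmnt /var  # /var should be mounted from that partition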
Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 15.7.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. 
For a restricted network installation, these files are on your mirror host. You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 
additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- Add the image content resources: imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev Use the imageContentSources section from the output of the command to mirror the repository or the values that you used when you mirrored the content from the media that you brought into your restricted network. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 15.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 15.7.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. 
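At a high level, the procedure that follows reduces to the command sequence sketched below. This is a condensed overview only, assuming <installation_directory> is the directory that contains your install-config.yaml file; the grep line is merely a convenience check, and the detailed steps, warnings, and optional tasks in the procedure still apply.
$ ./openshift-install create manifests --dir <installation_directory>
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml
$ rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml
$ grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml   # confirm the value is false
$ ./openshift-install create ignition-configs --dir <installation_directory>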
Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. Optional: If you manually created a cloud identity and access management (IAM) role, locate any CredentialsRequest objects with the TechPreviewNoUpgrade annotation in the release image by running the following command: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=<platform_name> Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-set: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects. Delete all CredentialsRequest objects that have the TechPreviewNoUpgrade annotation. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. 
Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Manually creating IAM 15.8. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 15.9. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 15.9.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 15.16. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: 
!Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. 
Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable 15.10. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which as a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 
8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 15.10.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 15.17. 
CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", 
!Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, 
event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.8" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.8" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. 
For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 15.11. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. 
Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 15.11.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 15.18. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. 
Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 
ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress 
Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: 
AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: "*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - 
Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile 15.12. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 15.13. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 15.3. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-052b3e6b060b5595d ap-east-1 ami-09c502968481ee218 ap-northeast-1 ami-06b1dbe049e3c1d23 ap-northeast-2 ami-08add6eb5aa1c8639 ap-northeast-3 ami-0af4dfc64506fe20e ap-south-1 ami-09b1532dd3d63fdc0 ap-south-2 ami-0a915cedf8558e600 ap-southeast-1 ami-0c914fd7a50130c9e ap-southeast-2 ami-04b54199f4be0ec9d ap-southeast-3 ami-0be3ee78b9a3fdf07 ap-southeast-4 ami-00a44d7d5054bb5f8 ca-central-1 ami-0bb1fd49820ea09ae eu-central-1 ami-03d9cb166a11c9b8a eu-central-2 ami-089865c640f876630 eu-north-1 ami-0e94d896e72eeae0d eu-south-1 ami-04df4e2850dce0721 eu-south-2 ami-0d80de3a5ba722545 eu-west-1 ami-066f2d86026ef97a8 eu-west-2 ami-0f1c0b26b1c99499d eu-west-3 ami-0f639505a9c74d9a2 me-central-1 ami-0fbb2ece8478f1402 me-south-1 ami-01507551558853852 sa-east-1 ami-097132aa0da53c981 us-east-1 ami-0624891c612b5eaa0 us-east-2 ami-0dc6c4d1bd5161f13 us-gov-east-1 ami-0bab20368b3b9b861 us-gov-west-1 ami-0fe8299f8e808e720 us-west-1 ami-0c03b7e5954f10f9b us-west-2 ami-0f4cdfd74e4a3fc29 Table 15.4. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-0d684ca7c09e6f5fc ap-east-1 ami-01b0e1c24d180fe5d ap-northeast-1 ami-06439c626e2663888 ap-northeast-2 ami-0a19d3bed3a2854e3 ap-northeast-3 ami-08b8fa76fd46b5c58 ap-south-1 ami-0ec6463b788929a6a ap-south-2 ami-0f5077b6d7e1b10a5 ap-southeast-1 ami-081a6c6a24e2ee453 ap-southeast-2 ami-0a70049ac02157a02 ap-southeast-3 ami-065fd6311a9d7e6a6 ap-southeast-4 ami-0105993dc2508c4f4 ca-central-1 ami-04582d73d5aad9a85 eu-central-1 ami-0f72c8b59213f628e eu-central-2 ami-0647f43516c31119c eu-north-1 ami-0d155ca6a531f5f72 eu-south-1 ami-02f8d2794a663dbd0 eu-south-2 ami-0427659985f520cae eu-west-1 ami-04e9944a8f9761c3e eu-west-2 ami-09c701f11d9a7b167 eu-west-3 ami-02cd8181243610e0d me-central-1 ami-03008d03f133e6ec0 me-south-1 ami-096bc3b4ec0faad76 sa-east-1 ami-01f9b5a4f7b8c50a1 us-east-1 ami-09ea6f8f7845792e1 us-east-2 ami-039cdb2bf3b5178da us-gov-east-1 ami-0fed54a5ab75baed0 us-gov-west-1 ami-0fc5be5af4bb1d79f us-west-1 ami-018e5407337da1062 us-west-2 ami-0c0c67ef81b80e8eb 15.14. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. 
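The procedure that follows, and the control plane and worker procedures later in this topic, each require an RhcosAmi parameter value, which you can take from the tables above or from the stream metadata. If you prefer to script that lookup in Python, as suggested in "Accessing RHCOS AMIs with stream metadata", a minimal sketch might look like the following; it assumes that openshift-install is on your PATH, and the region and architecture values are examples only:

# Minimal sketch: look up the RHCOS AMI for one region from the stream metadata,
# mirroring the jq example in "Accessing RHCOS AMIs with stream metadata".
# Assumes openshift-install is on your PATH; region and architecture are examples.
import json
import subprocess

region = "us-west-1"   # example region, as in the jq command above
arch = "x86_64"        # or "aarch64"

stream = json.loads(
    subprocess.run(
        ["openshift-install", "coreos", "print-stream-json"],
        check=True, capture_output=True, text=True,
    ).stdout
)

ami = stream["architectures"][arch]["images"]["aws"]["regions"][region]["image"]
print(f"RhcosAmi for {arch} in {region}: {ami}")

The printed AMI ID is the value to supply for the RhcosAmi parameter in the stacks that follow.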
Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 
8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules) 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 Location to fetch bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. 
You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 15.14.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 15.19. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. 
Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 15.15. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": 
"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 
36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 15.15.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 15.20. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. 
Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: 
Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] 15.16. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. 
You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to start the worker nodes on. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the worker Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 Base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example, m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.
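The parameter file in the preceding step references values that earlier stacks emit as outputs, such as WorkerSecurityGroupId and WorkerInstanceProfile. If you want to read those values programmatically rather than copy them from the console, a minimal boto3 sketch along the following lines is one option; the stack name cluster-sec is only the example name used earlier in this topic, so substitute the name of your security group and roles stack:

# Minimal sketch: read the outputs of the security stack so that the worker
# parameter file can be filled in programmatically instead of by hand.
# "cluster-sec" is the example stack name from earlier in this topic.
import boto3

cf = boto3.client("cloudformation")
stack = cf.describe_stacks(StackName="cluster-sec")["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}

print("WorkerSecurityGroupId:", outputs["WorkerSecurityGroupId"])
print("WorkerInstanceProfile:", outputs["WorkerInstanceProfile"])

The same pattern works for the DNS and load balancing stack outputs, such as the target group ARNs used by the bootstrap and control plane stacks.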
Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 15.16.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 15.21. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with master nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp 15.17. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. 
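Note If the wait-for bootstrap-complete command appears to stall, you can follow the bootstrap progress directly on the bootstrap machine. The following is a minimal sketch, not part of the required procedure, and assumes that you can reach the bootstrap node over SSH as the core user with the key that you configured and that you substitute your own bootstrap node address:
$ ssh -i <path>/<file_name> core@<bootstrap_node_address> journalctl -b -f -u release-image.service -u bootkube.service
The bootkube.service log shows the progress of the control plane bootstrap and can help you decide whether to keep waiting or to gather diagnostic data as described in the following resources.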
Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. 15.18. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 15.19. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 15.20. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Configure the Operators that are not available. 15.20.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 15.20.2. Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 15.20.2.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. 
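For example, you can create the bucket and block public access to it with the AWS CLI before you configure the registry. The following is a minimal sketch, not a required part of the procedure, and assumes that you substitute your own bucket name and region for the placeholder values:
$ aws s3api create-bucket --bucket <bucket_name> --region <region_name> --create-bucket-configuration LocationConstraint=<region_name>
$ aws s3api put-public-access-block --bucket <bucket_name> --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
Note If you use the us-east-1 region, omit the --create-bucket-configuration option, because that region does not accept a LocationConstraint value.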
Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 15.20.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 15.21. Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 15.22. Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. 
To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>"" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 
4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. 15.23. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Service (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Register your cluster on the Cluster registration page. 15.24. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. 
Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 15.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 15.26. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 15.27. Next steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster . If necessary, you can remove cloud provider credentials . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"publish: Internal",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=<platform_name>",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable",
"aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1",
"mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", 
[\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} 
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup",
"Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp 
MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - 
\"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile",
"openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'",
"ami-0d3e625f84626bbda",
"openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'",
"ami-0af1d3b7fa5be2131",
"aws s3 mb s3://<cluster-name>-infra 1",
"aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1",
"aws s3 ls s3://<cluster-name>-infra/",
"2019-04-03 16:15:16 314878 bootstrap.ign",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"aws cloudformation delete-stack --stack-name <name> 1",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m",
"aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1",
"Z3AADJGX6KTTL2",
"aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text",
"/hostedzone/Z3URY6TWQ91KVV",
"aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_aws/installing-restricted-networks-aws |
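The commands above repeat the same pattern for each CloudFormation stack: create-stack, then describe-stacks to retrieve the outputs needed by the next template. A minimal sketch of how that hand-off can be scripted is shown below; the stack name, template file, and parameter file are illustrative assumptions, not values produced by the installation program.

    # Assumed names; substitute your own stack, template, and parameter files.
    STACK_NAME=cluster-security
    aws cloudformation create-stack \
      --stack-name "${STACK_NAME}" \
      --template-body file://cluster_security.yaml \
      --parameters file://cluster_security_params.json \
      --capabilities CAPABILITY_NAMED_IAM

    # Block until CloudFormation reports CREATE_COMPLETE (exits non-zero on rollback).
    aws cloudformation wait stack-create-complete --stack-name "${STACK_NAME}"

    # Print the stack outputs (for example, MasterSecurityGroupId) so they can be
    # copied into the next template's parameter file.
    aws cloudformation describe-stacks \
      --stack-name "${STACK_NAME}" \
      --query 'Stacks[0].Outputs' \
      --output table

The --capabilities CAPABILITY_NAMED_IAM flag is only required for the stacks that create IAM resources, as in the create-stack command shown earlier.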
15.2. Migration Requirements and Limitations | 15.2. Migration Requirements and Limitations Before using KVM migration, make sure that your system fulfills the migration's requirements, and that you are aware of its limitations. Migration requirements A guest virtual machine installed on shared storage using one of the following protocols: Fibre Channel-based LUNs iSCSI NFS GFS2 SCSI RDMA protocols (SCSI RCP): the block export protocol used in Infiniband and 10GbE iWARP adapters Make sure that the libvirtd service is enabled and running. The ability to migrate effectively is dependent on the parameter settings in the /etc/libvirt/libvirtd.conf file. To edit this file, use the following procedure: Procedure 15.1. Configuring libvirtd.conf Open the /etc/libvirt/libvirtd.conf file as root: Change the parameters as needed and save the file. Restart the libvirtd service: The migration platforms and versions should be checked against Table 15.1, "Live Migration Compatibility" . Use a separate system exporting the shared storage medium. Storage should not reside on either of the two host physical machines used for the migration. Shared storage must mount at the same location on source and destination systems. The mounted directory names must be identical. Although it is possible to keep the images using different paths, it is not recommended. Note that, if you intend to use virt-manager to perform the migration, the path names must be identical. If you intend to use virsh to perform the migration, different network configurations and mount directories can be used with the help of the --xml option or pre-hooks . For more information on pre-hooks, see the libvirt upstream documentation , and for more information on the XML option, see Chapter 23, Manipulating the Domain XML . When migration is attempted on an existing guest virtual machine in a public bridge+tap network, the source and destination host machines must be located on the same network. Otherwise, the guest virtual machine network will not operate after migration. Migration Limitations Guest virtual machine migration has the following limitations when used on Red Hat Enterprise Linux with virtualization technology based on KVM: Point-to-point migration - the destination hypervisor must be designated manually from the originating hypervisor No validation or rollback is available Determination of the target may only be done manually Storage migration cannot be performed live on Red Hat Enterprise Linux 7 , but you can migrate storage while the guest virtual machine is powered down. Live storage migration is available on Red Hat Virtualization . Call your service representative for details. Note If you are migrating a guest machine that has virtio devices on it, make sure to set the number of vectors on any virtio device on either platform to 32 or fewer. For detailed information, see Section 23.17, "Devices" . | [
"systemctl enable libvirtd.service systemctl restart libvirtd.service",
"vim /etc/libvirt/libvirtd.conf",
"systemctl restart libvirtd"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-KVM_live_migration-Live_migration_requirements |
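The section above notes that point-to-point migration must be initiated manually from the originating hypervisor but does not show the invocation itself. A minimal sketch of a typical live migration with virsh is shown below; it assumes a guest named guest1, shared storage mounted at the same path on both hosts, and SSH access to a destination host that is only an illustrative name.

    # Initiate a live migration from the source host; the domain definition is
    # made persistent on the destination.
    virsh migrate --live --persistent --verbose guest1 qemu+ssh://destination.example.com/system

    # Confirm the guest is now running on the destination host.
    virsh --connect qemu+ssh://destination.example.com/system list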
Eclipse Temurin 8.0.412 release notes | Eclipse Temurin 8.0.412 release notes Red Hat build of OpenJDK 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.412_release_notes/index |
Chapter 1. About Tooling guide | Chapter 1. About Tooling guide This guide introduces the VS Code extensions for Red Hat build of Apache Camel and explains how to install and use the Camel CLI. Important The VS Code extensions for Apache Camel are listed as development support. For more information about the scope of development support, see Development Support Scope of Coverage for Red Hat Build of Apache Camel . VS Code extensions for Red Hat build of Apache Camel. The following VS Code extensions are explained in this guide. Language Support for Apache Camel The Language Support for Apache Camel extension adds language support for Apache Camel for Java, YAML, or XML DSL code. Debug Adapter for Apache Camel The Debug Adapter for Apache Camel adds Camel debugging support by attaching to a running Camel route written in Java, YAML, or XML DSL. Camel CLI The Camel CLI is a JBang-based Camel application that you can use to run your Camel routes. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/tooling_guide/introduction_tooling_guide |
Chapter 17. Using the Red Hat Marketplace | Chapter 17. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace that makes it easy to discover and access certified software for container-based environments that run on public clouds and on-premises. 17.1. Red Hat Marketplace features Cluster administrators can use the Red Hat Marketplace to manage software on OpenShift Container Platform, give developers self-service access to deploy application instances, and correlate application usage against a quota. 17.1.1. Connect OpenShift Container Platform clusters to the Marketplace Cluster administrators can install a common set of applications on OpenShift Container Platform clusters that connect to the Marketplace. They can also use the Marketplace to track cluster usage against subscriptions or quotas. Users that they add by using the Marketplace have their product usage tracked and billed to their organization. During the cluster connection process , a Marketplace Operator is installed that updates the image registry secret, manages the catalog, and reports application usage. 17.1.2. Install applications Cluster administrators can install Marketplace applications from within OperatorHub in OpenShift Container Platform, or from the Marketplace web application . You can access installed applications from the web console by clicking Operators > Installed Operators . 17.1.3. Deploy applications from different perspectives You can deploy Marketplace applications from the web console's Administrator and Developer perspectives. The Developer perspective Developers can access newly installed capabilities by using the Developer perspective. For example, after a database Operator is installed, a developer can create an instance from the catalog within their project. Database usage is aggregated and reported to the cluster administrator. This perspective does not include Operator installation and application usage tracking. The Administrator perspective Cluster administrators can access Operator installation and application usage information from the Administrator perspective. They can also launch application instances by browsing custom resource definitions (CRDs) in the Installed Operators list. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/building_applications/red-hat-marketplace |
Deploying OpenShift Data Foundation using IBM Power | Deploying OpenShift Data Foundation using IBM Power Red Hat OpenShift Data Foundation 4.17 Instructions on deploying Red Hat OpenShift Data Foundation on IBM Power Red Hat Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_ibm_power/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_installation_and_configuration_guide/making-open-source-more-inclusive |
Chapter 11. Storage | Chapter 11. Storage Red Hat Virtualization uses a centralized storage system for virtual disks, ISO files and snapshots. Storage networking can be implemented using: Network File System (NFS) GlusterFS exports Other POSIX compliant file systems Internet Small Computer System Interface (iSCSI) Local storage attached directly to the virtualization hosts Fibre Channel Protocol (FCP) Parallel NFS (pNFS) Setting up storage is a prerequisite for a new data center because a data center cannot be initialized unless storage domains are attached and activated. As a Red Hat Virtualization system administrator, you need to create, configure, attach and maintain storage for the virtualized enterprise. You should be familiar with the storage types and their use. Read your storage array vendor's guides, and see the Red Hat Enterprise Linux Storage Administration Guide for more information on the concepts, protocols, requirements or general usage of storage. To add storage domains you must be able to successfully access the Administration Portal, and there must be at least one host connected with a status of Up . Red Hat Virtualization has three types of storage domains: Data Domain: A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center. In addition, snapshots of the virtual machines are also stored in the data domain. The data domain cannot be shared across data centers. Data domains of multiple types (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center, provided they are all shared, rather than local, domains. You must attach a data domain to a data center before you can attach domains of other types to it. ISO Domain: ISO domains store ISO files (or logical CDs) used to install and boot operating systems and applications for the virtual machines. An ISO domain removes the data center's need for physical media. An ISO domain can be shared across different data centers. ISO domains can only be NFS-based. Only one ISO domain can be added to a data center. Export Domain: Export domains are temporary storage repositories that are used to copy and move images between data centers and Red Hat Virtualization environments. Export domains can be used to backup virtual machines. An export domain can be moved between data centers, however, it can only be active in one data center at a time. Export domains can only be NFS-based. Only one export domain can be added to a data center. Note The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center. See Section 11.7, "Importing Existing Storage Domains" for information on importing storage domains. Important Only commence configuring and attaching storage for your Red Hat Virtualization environment once you have determined the storage needs of your data center(s). 11.1. Understanding Storage Domains A storage domain is a collection of images that have a common storage interface. A storage domain contains complete images of templates and virtual machines (including snapshots), or ISO files. A storage domain can be made of block devices (SAN - iSCSI or FCP) or a file system (NAS - NFS, GlusterFS, or other POSIX compliant file systems). 
By default, GlusterFS domains and local storage domains support 4K block size. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO. On NFS, all virtual disks, templates, and snapshots are files. On SAN (iSCSI/FCP), each virtual disk, template or snapshot is a logical volume. Block devices are aggregated into a logical entity called a volume group, and then divided by LVM (Logical Volume Manager) into logical volumes for use as virtual hard disks. See Red Hat Enterprise Linux Logical Volume Manager Administration Guide for more information on LVM. Virtual disks can have one of two formats, either QCOW2 or raw. The type of storage can be sparse or preallocated. Snapshots are always sparse but can be taken for disks of either format. Virtual machines that share the same storage domain can be migrated between hosts that belong to the same cluster. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-storage |
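The chapter above lists NFS as one option for data, ISO, and export domains but does not show how the export itself is prepared. A minimal sketch of the usual NFS server-side preparation is shown below; the export path and export options are assumptions for illustration, and the 36:36 ownership corresponds to the vdsm:kvm user and group expected by Red Hat Virtualization hosts.

    # On the NFS server (not on a virtualization host).
    mkdir -p /exports/data
    chown 36:36 /exports/data    # 36:36 = vdsm:kvm, required so hosts can write to the domain
    chmod 0755 /exports/data

    # Add an /etc/exports entry, then reload the NFS exports.
    echo '/exports/data *(rw)' >> /etc/exports
    exportfs -r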
Chapter 3. Red Hat Ceph Storage installation | Chapter 3. Red Hat Ceph Storage installation As a storage administrator, you can use the cephadm utility to deploy new Red Hat Ceph Storage clusters. The cephadm utility manages the entire life cycle of a Ceph cluster. Installation and management tasks comprise two types of operations: Day One operations involve installing and bootstrapping a bare-minimum, containerized Ceph storage cluster, running on a single node. Day One also includes deploying the Monitor and Manager daemons and adding Ceph OSDs. Day Two operations use the Ceph orchestration interface, cephadm orch , or the Red Hat Ceph Storage Dashboard to expand the storage cluster by adding other Ceph services to the storage cluster. Prerequisites At least one running virtual machine (VM) or bare-metal server with an active internet connection. Red Hat Enterprise Linux 9.2 with ansible-core bundled into AppStream. A valid Red Hat subscription with the appropriate entitlements. Root-level access to all nodes. An active Red Hat Network (RHN) or service account to access the Red Hat Registry. Remove troubling configurations in iptables so that refresh of iptables services does not cause issues to the cluster. For an example, refer to the Verifying firewall rules are configured for default Ceph ports section of the Red Hat Ceph Storage Configuration Guide . 3.1. The cephadm utility The cephadm utility deploys and manages a Ceph storage cluster. It is tightly integrated with both the command-line interface (CLI) and the Red Hat Ceph Storage Dashboard web interface so that you can manage storage clusters from either environment. cephadm uses SSH to connect to hosts from the manager daemon to add, remove, or update Ceph daemon containers. It does not rely on external configuration or orchestration tools such as Ansible or Rook. Note The cephadm utility is available after running the preflight playbook on a host. The cephadm utility consists of two main components: The cephadm shell. The cephadm orchestrator. The cephadm shell The cephadm shell starts a bash shell within a container. Use the shell to complete "Day One" cluster setup tasks, such as installation and bootstrapping, and to use ceph commands. For more information about how to start the cephadm shell, see Starting the cephadm shell . The cephadm orchestrator Use the cephadm orchestrator to perform "Day Two" Ceph functions, such as expanding the storage cluster and provisioning Ceph daemons and services. You can use the cephadm orchestrator through either the command-line interface (CLI) or the web-based Red Hat Ceph Storage Dashboard. Orchestrator commands take the form ceph orch . The cephadm script interacts with the Ceph orchestration module used by the Ceph Manager. 3.2. How cephadm works The cephadm command manages the full lifecycle of a Red Hat Ceph Storage cluster. The cephadm command can perform the following operations: Bootstrap a new Red Hat Ceph Storage cluster. Launch a containerized shell that works with the Red Hat Ceph Storage command-line interface (CLI). Aid in debugging containerized daemons. The cephadm command uses ssh to communicate with the nodes in the storage cluster. This allows you to add, remove, or update Red Hat Ceph Storage containers without using external tools. Generate the ssh key pair during the bootstrapping process, or use your own ssh key. 
The cephadm bootstrapping process creates a small storage cluster on a single node, consisting of one Ceph Monitor and one Ceph Manager, as well as any required dependencies. You then use the orchestrator CLI or the Red Hat Ceph Storage Dashboard to expand the storage cluster to include nodes, and to provision all of the Red Hat Ceph Storage daemons and services. You can perform management functions through the CLI or from the Red Hat Ceph Storage Dashboard web interface. 3.3. The cephadm-ansible playbooks The cephadm-ansible package is a collection of Ansible playbooks to simplify workflows that are not covered by cephadm . After installation, the playbooks are located in /usr/share/cephadm-ansible/ . The cephadm-ansible package includes the following playbooks: cephadm-preflight.yml cephadm-clients.yml cephadm-purge-cluster.yml The cephadm-preflight playbook Use the cephadm-preflight playbook to initially setup hosts before bootstrapping the storage cluster and before adding new nodes or clients to your storage cluster. This playbook configures the Ceph repository and installs some prerequisites such as podman , lvm2 , chrony , and cephadm . The cephadm-clients playbook Use the cephadm-clients playbook to set up client hosts. This playbook handles the distribution of configuration and keyring files to a group of Ceph clients. The cephadm-purge-cluster playbook Use the cephadm-purge-cluster playbook to remove a Ceph cluster. This playbook purges a Ceph cluster managed with cephadm. Additional Resources For more information about the cephadm-preflight playbook, see Running the preflight playbook . For more information about the cephadm-clients playbook, see Running the cephadm-clients playbook . For more information about the cephadm-purge-cluster playbook, see Purging the Ceph storage cluster . 3.4. Registering the Red Hat Ceph Storage nodes to the CDN and attaching subscriptions For full compatibility information, see Compatibility Guide . Prerequisites At least one running virtual machine (VM) or bare-metal server with an active internet connection. Red Hat Enterprise Linux 9.4 or 9.5 with ansible-core bundled into AppStream.. A valid Red Hat subscription with the appropriate entitlements. Root-level access to all nodes. Procedure Register the node, and when prompted, enter your Red Hat Customer Portal credentials: Syntax Pull the latest subscription data from the CDN: Syntax List all available subscriptions for Red Hat Ceph Storage: Syntax Identify the appropriate subscription and retrieve its Pool ID. Attach a pool ID to gain access to the software entitlements. Use the Pool ID you identified in the step. Syntax Disable the default software repositories, and then enable the server and the extras repositories on the respective version of Red Hat Enterprise Linux: Red Hat Enterprise Linux 9 Update the system to receive the latest packages for Red Hat Enterprise Linux: Syntax Subscribe to Red Hat Ceph Storage 8 content. Follow the instructions in How to Register Ceph with Red Hat Satellite 6 . Enable the ceph-tools repository: Red Hat Enterprise Linux 9 Repeat the above steps on all nodes you are adding to the cluster. Install cephadm-ansible : Syntax 3.5. Configuring Ansible inventory location You can configure inventory location files for the cephadm-ansible staging and production environments. The Ansible inventory hosts file contains all the hosts that are part of the storage cluster. 
You can list nodes individually in the inventory hosts file or you can create groups such as [mons] , [osds] , and [rgws] to provide clarity to your inventory and ease the usage of the --limit option to target a group or node when running a playbook. Note If deploying clients, client nodes must be defined in a dedicated [clients] group. Prerequisites An Ansible administration node. Root-level access to the Ansible administration node. The cephadm-ansible package is installed on the node. Procedure Navigate to the /usr/share/cephadm-ansible/ directory: Optional: Create subdirectories for staging and production: Optional: Edit the ansible.cfg file and add the following line to assign a default inventory location: Optional: Create an inventory hosts file for each environment: Open and edit each hosts file and add the nodes and [admin] group: Replace NODE_NAME_1 and NODE_NAME_2 with the Ceph nodes such as monitors, OSDs, MDSs, and gateway nodes. Replace ADMIN_NODE_NAME_1 with the name of the node where the admin keyring is stored. Example Note If you set the inventory location in the ansible.cfg file to staging, you need to run the playbooks in the staging environment as follows: Syntax To run the playbooks in the production environment: Syntax 3.6. Enabling SSH login as root user on Red Hat Enterprise Linux 9 Red Hat Enterprise Linux 9 does not support SSH login as a root user even if PermitRootLogin parameter is set to yes in the /etc/ssh/sshd_config file. You get the following error: Example You can run one of the following methods to enable login as a root user: Use "Allow root SSH login with password" flag while setting the root password during installation of Red Hat Enterprise Linux 9. Manually set the PermitRootLogin parameter after Red Hat Enterprise Linux 9 installation. This section describes manual setting of the PermitRootLogin parameter. Prerequisites Root-level access to all nodes. Procedure Open the etc/ssh/sshd_config file and set the PermitRootLogin to yes : Example Restart the SSH service: Example Login to the node as the root user: Syntax Replace HOST_NAME with the host name of the Ceph node. Example Enter the root password when prompted. Additional Resources For more information, see the Not able to login as root user via ssh in RHEL 9 server Knowledgebase solution. 3.7. Creating an Ansible user with sudo access You can create an Ansible user with password-less root access on all nodes in the storage cluster to run the cephadm-ansible playbooks. The Ansible user must be able to log into all the Red Hat Ceph Storage nodes as a user that has root privileges to install software and create configuration files without prompting for a password. Prerequisites Root-level access to all nodes. For Red Hat Enterprise Linux 9, to log in as a root user, see Enabling SSH log in as root user on Red Hat Enterprise Linux 9 Procedure Log in to the node as the root user: Syntax Replace HOST_NAME with the host name of the Ceph node. Example Enter the root password when prompted. Create a new Ansible user: Syntax Replace USER_NAME with the new user name for the Ansible user. Example Important Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks. Set a new password for this user: Syntax Replace USER_NAME with the new user name for the Ansible user. Example Enter the new password twice when prompted. 
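A brief sketch of the two steps above, assuming ceph-admin as a hypothetical user name:

# create the Ansible user; do not use ceph as the user name
useradd ceph-admin

# set the user's password; enter the new password twice when prompted
passwd ceph-admin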
Configure sudo access for the newly created user: Syntax Replace USER_NAME with the new user name for the Ansible user. Example Assign the correct file permissions to the new file: Syntax Replace USER_NAME with the new user name for the Ansible user. Example Repeat the above steps on all nodes in the storage cluster. Additional Resources For more information about creating user accounts, see the Getting started with managing user accounts section in the Configuring basic system settings chapter of the Red Hat Enterprise Linux 9 guide. 3.8. Enabling password-less SSH for Ansible Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password. Prerequisites Access to the Ansible administration node. Ansible user with sudo access to all nodes in the storage cluster. For Red Hat Enterprise Linux 9, to log in as a root user, see Enabling SSH log in as root user on Red Hat Enterprise Linux 9 Procedure Generate the SSH key pair, accept the default file name and leave the passphrase empty: Copy the public key to all nodes in the storage cluster: Replace USER_NAME with the new user name for the Ansible user. Replace HOST_NAME with the host name of the Ceph node. Example Create the user's SSH config file: Open for editing the config file. Set values for the Hostname and User options for each node in the storage cluster: Replace HOST_NAME with the host name of the Ceph node. Replace USER_NAME with the new user name for the Ansible user. Example Important By configuring the ~/.ssh/config file you do not have to specify the -u USER_NAME option each time you execute the ansible-playbook command. Set the correct file permissions for the ~/.ssh/config file: Additional Resources The ssh_config(5) manual page. See Using secure communications between two systems with OpenSSH . 3.9. Running the preflight playbook This Ansible playbook configures the Ceph repository and prepares the storage cluster for bootstrapping. It also installs some prerequisites, such as podman , lvm2 , chrony , and cephadm . The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible . The preflight playbook uses the cephadm-ansible inventory file to identify all the admin and nodes in the storage cluster. The default location for the inventory file is /usr/share/cephadm-ansible/hosts . The following example shows the structure of a typical inventory file: Example The [admin] group in the inventory file contains the name of the node where the admin keyring is stored. On a new storage cluster, the node in the [admin] group will be the bootstrap node. To add additional admin hosts after bootstrapping the cluster see Setting up the admin node in the Installation Guide for more information. Note Run the preflight playbook before you bootstrap the initial host. Important If you are performing a disconnected installation, see Running the preflight playbook for a disconnected installation . Prerequisites Root-level access to the Ansible administration node. Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster. Note In the below example, host01 is the bootstrap node. Procedure Navigate to the the /usr/share/cephadm-ansible directory. Open and edit the hosts file and add your nodes: Example Run the preflight playbook: Syntax Example After installation is complete, cephadm resides in the /usr/sbin/ directory. 
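A minimal sketch of the playbook invocation, assuming the default inventory location. The ceph_origin value shown here is an assumption for a CDN-based installation; the disconnected procedure later in this chapter runs the same playbook with ceph_origin set to custom instead.

cd /usr/share/cephadm-ansible
ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"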
Use the --limit option to run the preflight playbook on a selected set of hosts in the storage cluster: Syntax Replace GROUP_NAME with a group name from your inventory file. Replace NODE_NAME with a specific node name from your inventory file. Note Optionally, you can group your nodes in your inventory file by group name such as [mons] , [osds] , and [mgrs] . However, admin nodes must be added to the [admin] group and clients must be added to the [clients] group. Example When you run the preflight playbook, cephadm-ansible automatically installs chrony and ceph-common on the client nodes. The preflight playbook installs chrony but configures it for a single NTP source. If you want to configure multiple sources or if you have a disconnected environment, see the following documentation for more information: How to configure chrony? Best practices for NTP . Basic chrony NTP troubleshooting . 3.10. Bootstrapping a new storage cluster The cephadm utility performs the following tasks during the bootstrap process: Installs and starts a Ceph Monitor daemon and a Ceph Manager daemon for a new Red Hat Ceph Storage cluster on the local node as containers. Creates the /etc/ceph directory. Writes a copy of the public key to /etc/ceph/ceph.pub for the Red Hat Ceph Storage cluster and adds the SSH key to the root user's /root/.ssh/authorized_keys file. Applies the _admin label to the bootstrap node. Writes a minimal configuration file needed to communicate with the new cluster to /etc/ceph/ceph.conf . Writes a copy of the client.admin administrative secret key to /etc/ceph/ceph.client.admin.keyring . Deploys a basic monitoring stack with prometheus, grafana, and other tools such as node-exporter and alert-manager . Important If you are performing a disconnected installation, see Performing a disconnected installation . Note If you have existing prometheus services that you want to run with the new storage cluster, or if you are running Ceph with Rook, use the --skip-monitoring-stack option with the cephadm bootstrap command. This option bypasses the basic monitoring stack so that you can manually configure it later. Important If you are deploying a monitoring stack, see Deploying the monitoring stack using the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide . Important Bootstrapping provides the default user name and password for the initial login to the Dashboard. Bootstrap requires you to change the password after you log in. Important Before you begin the bootstrapping process, make sure that the container image that you want to use has the same version of Red Hat Ceph Storage as cephadm . If the two versions do not match, bootstrapping fails at the Creating initial admin user stage. Note Before you begin the bootstrapping process, you must create a username and password for the registry.redhat.io container registry. For more information about Red Hat container registry authentication, see the knowledge base article Red Hat Container Registry Authentication Prerequisites An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster. Login access to registry.redhat.io . A minimum of 10 GB of free space for /var/lib/containers/ . Root-level access to all nodes. Note If the storage cluster includes multiple networks and interfaces, be sure to choose a network that is accessible by any node that uses the storage cluster. 
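Before bootstrapping, you can quickly confirm two of the prerequisites listed above on the intended bootstrap node; a minimal check:

# confirm at least 10 GB of free space under /var/lib/containers/
df -h /var/lib/containers/

# confirm that the node can authenticate to the Red Hat registry
podman login registry.redhat.io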
Note If the local node uses fully-qualified domain names (FQDN), then add the --allow-fqdn-hostname option to cephadm bootstrap on the command line. Important Run cephadm bootstrap on the node that you want to be the initial Monitor node in the cluster. The IP_ADDRESS option should be the IP address of the node you are using to run cephadm bootstrap . Note If you want to deploy a storage cluster using IPV6 addresses, then use the IPV6 address format for the --mon-ip IP_ADDRESS option. For example: cephadm bootstrap --mon-ip 2620:52:0:880:225:90ff:fefc:2536 --registry-json /etc/mylogin.json Procedure Bootstrap a storage cluster: Syntax Example Note If you want internal cluster traffic routed over the public network, you can omit the --cluster-network NETWORK_CIDR option. The script takes a few minutes to complete. Once the script completes, it provides the credentials to the Red Hat Ceph Storage Dashboard URL, a command to access the Ceph command-line interface (CLI), and a request to enable telemetry. Additional Resources For more information about the recommended bootstrap command options, see Recommended cephadm bootstrap command options . For more information about the options available for the bootstrap command, see Bootstrap command options . For information about using a JSON file to contain login credentials for the bootstrap process, see Using a JSON file to protect login information . 3.10.1. Recommended cephadm bootstrap command options The cephadm bootstrap command has multiple options that allow you to specify file locations, configure ssh settings, set passwords, and perform other initial configuration tasks. Red Hat recommends that you use a basic set of command options for cephadm bootstrap . You can configure additional options after your initial cluster is up and running. The following examples show how to specify the recommended options. Syntax Example Additional Resources For more information about the --registry-json option, see Using a JSON file to protect login information . For more information about all available cephadm bootstrap options, see Bootstrap command options . For more information about bootstrapping the storage cluster as a non-root user, see Bootstrapping the storage cluster as a non-root user . 3.10.2. Using a JSON file to protect login information As a storage administrator, you might choose to add login and password information to a JSON file, and then refer to the JSON file for bootstrapping. This protects the login credentials from exposure. Note You can also use a JSON file with the cephadm --registry-login command. Prerequisites An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster. Login access to registry.redhat.io . A minimum of 10 GB of free space for /var/lib/containers/ . Root-level access to all nodes. Procedure Create the JSON file. In this example, the file is named mylogin.json . Syntax Example Bootstrap a storage cluster: Syntax Example 3.10.3. Bootstrapping a storage cluster using a service configuration file To bootstrap the storage cluster and configure additional hosts and daemons using a service configuration file, use the --apply-spec option with the cephadm bootstrap command. The configuration file is a .yaml file that contains the service type, placement, and designated nodes for services that you want to deploy. 
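A minimal sketch of such a file, written here with a shell heredoc. The host names, addresses, and service_id are placeholders, the OSD specification assumes that OSDs should be created on all available devices, and the exact spec layout can vary between releases, so verify it against the Red Hat Ceph Storage Operations Guide before use.

cat <<'EOF' > cluster-spec.yaml
service_type: host
addr: 10.0.0.11
hostname: host02
---
service_type: host
addr: 10.0.0.12
hostname: host03
---
service_type: mon
placement:
  host_pattern: "host*"
---
service_type: osd
service_id: all_available_devices
placement:
  host_pattern: "host*"
spec:
  data_devices:
    all: true
EOF

You would then pass this file to the bootstrap command with the --apply-spec option, as shown in the procedure that follows.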
Note If you want to use a non-default realm or zone for applications such as multi-site, configure your Ceph Object Gateway daemons after you bootstrap the storage cluster, instead of adding them to the configuration file and using the --apply-spec option. This gives you the opportunity to create the realm or zone you need for the Ceph Object Gateway daemons before deploying them. See the Red Hat Ceph Storage Operations Guide for more information. Note If deploying a NFS-Ganesha gateway, or Metadata Server (MDS) service, configure them after bootstrapping the storage cluster. To deploy a Ceph NFS-Ganesha gateway, you must create a RADOS pool first. To deploy the MDS service, you must create a CephFS volume first. See the Red Hat Ceph Storage Operations Guide for more information. Prerequisites At least one running virtual machine (VM) or server. Red Hat Enterprise Linux 9.4 or 9.5 with ansible-core bundled into AppStream.. Root-level access to all nodes. Login access to registry.redhat.io . Passwordless ssh is set up on all hosts in the storage cluster. cephadm is installed on the node that you want to be the initial Monitor node in the storage cluster. Procedure Log in to the bootstrap host. Create the service configuration .yaml file for your storage cluster. The example file directs cephadm bootstrap to configure the initial host and two additional hosts, and it specifies that OSDs be created on all available disks. Example Bootstrap the storage cluster with the --apply-spec option: Syntax Example The script takes a few minutes to complete. Once the script completes, it provides the credentials to the Red Hat Ceph Storage Dashboard URL, a command to access the Ceph command-line interface (CLI), and a request to enable telemetry. Once your storage cluster is up and running, see the Red Hat Ceph Storage Operations Guide for more information about configuring additional daemons and services. Additional Resources For more information about the options available for the bootstrap command, see the Bootstrap command options . 3.10.4. Bootstrapping the storage cluster as a non-root user You can bootstrap the storage cluster as a non-root user if you have passwordless sudo privileges. To bootstrap the Red Hat Ceph Storage cluster as a non-root user on the bootstrap node, use the --ssh-user option with the cephadm bootstrap command. --ssh-user specifies a user for SSH connections to cluster nodes. Non-root users must have passwordless sudo access. Prerequisites An IP address for the first Ceph Monitor container, which is also the IP address for the initial Monitor node in the storage cluster. Login access to registry.redhat.io . A minimum of 10 GB of free space for /var/lib/containers/ . Optional: SSH public and private keys. Passwordless sudo access to the bootstrap node. Non-root users have passwordless sudo access on all nodes intended to be part of the cluster. cephadm installed on the node that you want to be the initial Monitor node in the storage cluster. Procedure Change to sudo on the bootstrap node: Syntax Example Check the SSH connection to the bootstrap node: Example Optional: Invoke the cephadm bootstrap command. Note Using private and public keys is optional. If SSH keys have not previously been created, these can be created during this step. Include the --ssh-private-key and --ssh-public-key options: Syntax Example Additional Resources For more information about all available cephadm bootstrap options, see Bootstrap command options . 
For more information about utilizing Ansible to automate bootstrapping a rootless cluster, see the knowledge base article Red Hat Ceph Storage 6 rootless deployment utilizing ansible ad-hoc commands . For more information about sudo privileges, see Managing sudo access . 3.10.5. Bootstrap command options The cephadm bootstrap command bootstraps a Ceph storage cluster on the local host. It deploys a MON daemon and a MGR daemon on the bootstrap node, automatically deploys the monitoring stack on the local host, and calls ceph orch host add HOSTNAME . The following table lists the available options for cephadm bootstrap . cephadm bootstrap option Description --config CONFIG_FILE , -c CONFIG_FILE CONFIG_FILE is the ceph.conf file to use with the bootstrap command --cluster-network NETWORK_CIDR Use the subnet defined by NETWORK_CIDR for internal cluster traffic. This is specified in CIDR notation. For example: 10.10.128.0/24 . --mon-id MON_ID Bootstraps on the host named MON_ID . Default value is the local host. --mon-addrv MON_ADDRV mon IPs (e.g., [v2:localipaddr:3300,v1:localipaddr:6789]) --mon-ip IP_ADDRESS IP address of the node you are using to run cephadm bootstrap . --mgr-id MGR_ID Host ID where a MGR node should be installed. Default: randomly generated. --fsid FSID Cluster FSID. --output-dir OUTPUT_DIR Use this directory to write config, keyring, and pub key files. --output-keyring OUTPUT_KEYRING Use this location to write the keyring file with the new cluster admin and mon keys. --output-config OUTPUT_CONFIG Use this location to write the configuration file to connect to the new cluster. --output-pub-ssh-key OUTPUT_PUB_SSH_KEY Use this location to write the public SSH key for the cluster. --skip-ssh Skip the setup of the ssh key on the local host. --initial-dashboard-user INITIAL_DASHBOARD_USER Initial user for the dashboard. --initial-dashboard-password INITIAL_DASHBOARD_PASSWORD Initial password for the initial dashboard user. --ssl-dashboard-port SSL_DASHBOARD_PORT Port number used to connect with the dashboard using SSL. --dashboard-key DASHBOARD_KEY Dashboard key. --dashboard-crt DASHBOARD_CRT Dashboard certificate. --ssh-config SSH_CONFIG SSH config. --ssh-private-key SSH_PRIVATE_KEY SSH private key. --ssh-public-key SSH_PUBLIC_KEY SSH public key. --ssh-user SSH_USER Sets the user for SSH connections to cluster hosts. Passwordless sudo is needed for non-root users. --skip-mon-network Sets mon public_network based on the bootstrap mon ip. --skip-dashboard Do not enable the Ceph Dashboard. --dashboard-password-noupdate Disable forced dashboard password change. --no-minimize-config Do not assimilate and minimize the configuration file. --skip-ping-check Do not verify that the mon IP is pingable. --skip-pull Do not pull the latest image before bootstrapping. --skip-firewalld Do not configure firewalld. --allow-overwrite Allow the overwrite of existing -output-* config/keyring/ssh files. --allow-fqdn-hostname Allow fully qualified host name. --skip-prepare-host Do not prepare host. --orphan-initial-daemons Do not create initial mon, mgr, and crash service specs. --skip-monitoring-stack Do not automatically provision the monitoring stack] (prometheus, grafana, alertmanager, node-exporter). --apply-spec APPLY_SPEC Apply cluster spec file after bootstrap (copy ssh key, add hosts and apply services). --registry-url REGISTRY_URL Specifies the URL of the custom registry to log into. For example: registry.redhat.io . 
--registry-username REGISTRY_USERNAME User name of the login account to the custom registry. --registry-password REGISTRY_PASSWORD Password of the login account to the custom registry. --registry-json REGISTRY_JSON JSON file containing registry login information. Additional Resources For more information about the --skip-monitoring-stack option, see Adding hosts . For more information about logging into the registry with the registry-json option, see help for the registry-login command. For more information about cephadm options, see help for cephadm . 3.10.6. Configuring a private registry for a disconnected installation You can use a disconnected installation procedure to install cephadm and bootstrap your storage cluster on a private network. A disconnected installation uses a private registry for installation. Use this procedure when the Red Hat Ceph Storage nodes do NOT have access to the Internet during deployment. Follow this procedure to set up a secure private registry using authentication and a self-signed certificate. Perform these steps on a node that has both Internet access and access to the local cluster. Note Using an insecure registry for production is not recommended. Prerequisites At least one running virtual machine (VM) or server with an active internet connection. Red Hat Enterprise Linux 9.4 or 9.5 with ansible-core bundled into AppStream.. Login access to registry.redhat.io . Root-level access to all nodes. Procedure Log in to the node that has access to both the public network and the cluster nodes. Register the node, and when prompted, enter the appropriate Red Hat Customer Portal credentials: Example Pull the latest subscription data: Example List all available subscriptions for Red Hat Ceph Storage: Example Copy the Pool ID from the list of available subscriptions for Red Hat Ceph Storage. Attach the subscription to get access to the software entitlements: Syntax Replace POOL_ID with the Pool ID identified in the step. Disable the default software repositories, and enable the server and the extras repositories: Red Hat Enterprise Linux 9 Install the podman and httpd-tools packages: Example Create folders for the private registry: Example The registry will be stored in /opt/registry and the directories are mounted in the container running the registry. The auth directory stores the htpasswd file the registry uses for authentication. The certs directory stores the certificates the registry uses for authentication. The data directory stores the registry images. Create credentials for accessing the private registry: Syntax The b option provides the password from the command line. The B option stores the password using Bcrypt encryption. The c option creates the htpasswd file. Replace PRIVATE_REGISTRY_USERNAME with the username to create for the private registry. Replace PRIVATE_REGISTRY_PASSWORD with the password to create for the private registry username. Example Create a self-signed certificate: Syntax Replace LOCAL_NODE_FQDN with the fully qualified host name of the private registry node. Note You will be prompted for the respective options for your certificate. The CN= value is the host name of your node and should be resolvable by DNS or the /etc/hosts file. Example Note When creating a self-signed certificate, be sure to create a certificate with a proper Subject Alternative Name (SAN). 
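A sketch of the credential and certificate steps above. The user name, password, and FQDN are placeholders, the key and certificate file names are assumptions based on the symbolic-link step that follows, and the -addext flag assumes OpenSSL 1.1.1 or later.

# create the htpasswd file the registry uses for authentication
# (-b takes the password on the command line, -B uses Bcrypt, -c creates the file)
htpasswd -bBc /opt/registry/auth/htpasswd myregistryusername myregistrypassword1

# create a self-signed certificate that carries a Subject Alternative Name
openssl req -newkey rsa:4096 -nodes -sha256 \
    -keyout /opt/registry/certs/domain.key \
    -x509 -days 365 \
    -out /opt/registry/certs/domain.crt \
    -addext "subjectAltName = DNS:host01.example.com"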
Podman commands that require TLS verification for certificates that do not include a proper SAN, return the following error: x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0 Create a symbolic link to domain.cert to allow skopeo to locate the certificate with the file extension .cert : Example Add the certificate to the trusted list on the private registry node: Syntax Replace LOCAL_NODE_FQDN with the FQDN of the private registry node. Example Copy the certificate to any nodes that will access the private registry for installation and update the trusted list: Example Start the local secure private registry: Syntax Replace NAME_OF_CONTAINER with a name to assign to the container. Example This starts the private registry on port 5000 and mounts the volumes of the registry directories in the container running the registry. On the local registry node, verify that registry.redhat.io is in the container registry search path. Open for editing the /etc/containers/registries.conf file, and add registry.redhat.io to the unqualified-search-registries list, if it does not exist: Example Login to registry.redhat.io with your Red Hat Customer Portal credentials: Syntax Copy the following Red Hat Ceph Storage 8 image, Prometheus images, and Dashboard image from the Red Hat Customer Portal to the private registry: Table 3.1. Custom image details for monitoring stack Monitoring stack component Image details Prometheus registry.redhat.io/openshift4/ose-prometheus:v4.15 Grafana registry.redhat.io/rhceph/grafana-rhel9:latest Node-exporter registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.15 AlertManager registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.15 HAProxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest Keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest SNMP Gateway registry.redhat.io/rhceph/snmp-notifier-rhel9:latest OAuth2 Proxy registry.redhat.io/rhceph/oauth2-proxy:v7.6 Syntax Replace CERTIFICATE_DIRECTORY_PATH with the directory path to the self-signed certificates. Replace RED_HAT_CUSTOMER_PORTAL_LOGIN and RED_HAT_CUSTOMER_PORTAL_PASSWORD with your Red Hat Customer Portal credentials. Replace PRIVATE_REGISTRY_USERNAME and PRIVATE_REGISTRY_PASSWORD with the private registry credentials. Replace SRC_IMAGE and SRC_TAG with the name and tag of the image to copy from registry.redhat.io. Replace DST_IMAGE and DST_TAG with the name and tag of the image to copy to the private registry. Replace LOCAL_NODE_FQDN with the FQDN of the private registry. Example Using the curl command, verify the images reside in the local registry: Syntax Example Additional Resources For more information on different image Ceph package versions, see the knowledge base solution for details on What are the Red Hat Ceph Storage releases and corresponding Ceph package versions? 3.10.7. Running the preflight playbook for a disconnected installation You use the cephadm-preflight.yml Ansible playbook to configure the Ceph repository and prepare the storage cluster for bootstrapping. It also installs some prerequisites, such as podman , lvm2 , chrony , and cephadm . The preflight playbook uses the cephadm-ansible inventory hosts file to identify all the nodes in the storage cluster. The default location for cephadm-ansible , cephadm-preflight.yml , and the inventory hosts file is /usr/share/cephadm-ansible/ . 
The following example shows the structure of a typical inventory file: Example The [admin] group in the inventory file contains the name of the node where the admin keyring is stored. Note Run the preflight playbook before you bootstrap the initial host. Prerequisites The cephadm-ansible package is installed on the Ansible administration node. Root-level access to all nodes in the storage cluster. Passwordless ssh is set up on all hosts in the storage cluster. Nodes configured to access a local YUM repository server with the following repositories enabled: rhel-9-for-x86_64-baseos-rpms rhel-9-for-x86_64-appstream-rpms rhceph-8-tools-for-rhel-9-x86_64-rpms Note For more information about setting up a local YUM repository, see the knowledge base article Creating a Local Repository and Sharing with Disconnected/Offline/Air-gapped Systems Procedure Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node. Open and edit the hosts file and add your nodes. Run the preflight playbook with the ceph_origin parameter set to custom to use a local YUM repository: Syntax Example After installation is complete, cephadm resides in the /usr/sbin/ directory. Note Populate the contents of the registries.conf file with the Ansible playbook: Syntax Example Alternatively, you can use the --limit option to run the preflight playbook on a selected set of hosts in the storage cluster: Syntax Replace GROUP_NAME with a group name from your inventory file. Replace NODE_NAME with a specific node name from your inventory file. Example Note When you run the preflight playbook, cephadm-ansible automatically installs chrony and ceph-common on the client nodes. 3.10.8. Performing a disconnected installation Before you can perform the installation, you must obtain a Red Hat Ceph Storage container image, either from a proxy host that has access to the Red Hat registry or by copying the image to your local registry. Note If your local registry uses a self-signed certificate with a local registry, ensure you have added the trusted root certificate to the bootstrap host. For more information, see Configuring a private registry for a disconnected installation . Important Before you begin the bootstrapping process, make sure that the container image that you want to use has the same version of Red Hat Ceph Storage as cephadm . If the two versions do not match, bootstrapping fails at the Creating initial admin user stage. Prerequisites At least one running virtual machine (VM) or server. Root-level access to all nodes. Passwordless ssh is set up on all hosts in the storage cluster. The preflight playbook has been run on the bootstrap host in the storage cluster. For more information, see Running the preflight playbook for a disconnected installation . A private registry has been configured and the bootstrap node has access to it. For more information, see Configuring a private registry for a disconnected installation . A Red Hat Ceph Storage container image resides in the custom registry. Procedure Log in to the bootstrap host. Bootstrap the storage cluster: Syntax Replace PRIVATE_REGISTRY_NODE_FQDN with the fully qualified domain name of your private registry. Replace CUSTOM_IMAGE_NAME and IMAGE_TAG with the name and tag of the Red Hat Ceph Storage container image that resides in the private registry. Replace IP_ADDRESS with the IP address of the node you are using to run cephadm bootstrap . Replace PRIVATE_REGISTRY_USERNAME with the username to create for the private registry. 
Replace PRIVATE_REGISTRY_PASSWORD with the password to create for the private registry username. Example The script takes a few minutes to complete. Once the script completes, it provides the credentials to the Red Hat Ceph Storage Dashboard URL, a command to access the Ceph command-line interface (CLI), and a request to enable telemetry. After the bootstrap process is complete, see Changing configurations of custom container images for disconnected installations to configure the container images. Additional Resources Once your storage cluster is up and running, see the Red Hat Ceph Storage Operations Guide for more information about configuring additional daemons and services. 3.10.9. Changing configurations of custom container images for disconnected installations After you perform the initial bootstrap for disconnected nodes, you must specify custom container images for monitoring stack daemons. You can override the default container images for monitoring stack daemons, since the nodes do not have access to the default container registry. Note Make sure that the bootstrap process on the initial host is complete before making any configuration changes. By default, the monitoring stack components are deployed based on the primary Ceph image. For disconnected environment of the storage cluster, you can use the latest available monitoring stack component images. Note When using a custom registry, be sure to log in to the custom registry on newly added nodes before adding any Ceph daemons. Syntax Example Prerequisites At least one running virtual machine (VM) or server. Red Hat Enterprise Linux 9.4 or 9.5 with ansible-core bundled into AppStream.. Root-level access to all nodes. Passwordless ssh is set up on all hosts in the storage cluster. Procedure Set the custom container images with the ceph config command: Syntax Use the following options for OPTION_NAME : Example Redeploy node-exporter : Syntax Note If any of the services do not deploy, you can redeploy them with the ceph orch redeploy command. Note By setting a custom image, the default values for the configuration image name and tag will be overridden, but not overwritten. The default values change when updates become available. By setting a custom image, you will not be able to configure the component for which you have set the custom image for automatic updates. You will need to manually update the configuration image name and tag to be able to install updates. If you choose to revert to using the default configuration, you can reset the custom container image. Use ceph config rm to reset the configuration option: Syntax Example Additional Resources For more information about performing a disconnected installation, see Performing a disconnected installation . 3.11. Distributing SSH keys You can use the cephadm-distribute-ssh-key.yml playbook to distribute the SSH keys instead of creating and distributing the keys manually. The playbook distributes an SSH public key over all hosts in the inventory. You can also generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password. Prerequisites Ansible is installed on the administration node. Access to the Ansible administration node. Ansible user with sudo access to all nodes in the storage cluster. Bootstrapping is completed. See the Bootstrapping a new storage cluster section in the Red Hat Ceph Storage Installation Guide . 
Procedure Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example From the Ansible administration node, distribute the SSH keys. The optional cephadm_pubkey_path parameter is the full path name of the SSH public key file on the ansible controller host. Note If cephadm_pubkey_path is not specified, the playbook gets the key from the cephadm get-pub-key command. This implies that you have at least bootstrapped a minimal cluster. Syntax Example 3.12. Starting the cephadm shell The cephadm shell command opens a bash shell in a container with all Ceph packages installed. Use the shell to run "Day One" cluster setup tasks, such as installation and bootstrapping, and to run ceph commands. Note If the node contains configuration and keyring files in /etc/ceph/ , the container environment uses the values in those files as defaults for the cephadm shell. If you execute the cephadm shell on a MON node, the cephadm shell inherits its default configuration from the MON container, instead of using the default configuration. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Procedure Open the cephadm shell in one of the following ways: Enter cephadm shell at the system prompt. This example runs the ceph -s command from within the shell. Example At the system prompt, type cephadm shell and the command you want to run: Example Note To exit the cephadm shell, use the exit command. 3.13. Verifying the cluster installation Once the cluster installation is complete, you can verify that the Red Hat Ceph Storage 8 installation is running properly. There are two ways of verifying the storage cluster installation as a root user: Run the podman ps command. Run the cephadm shell ceph -s . Prerequisites Root-level access to all nodes in the storage cluster. Procedure Run the podman ps command: Example Note In Red Hat Ceph Storage 8, the format of the systemd units has changed. In the NAMES column, the unit files now include the FSID . Run the cephadm shell ceph -s command: Example Note The health of the storage cluster is in HEALTH_WARN status as the hosts and the daemons are not added. 3.14. Adding hosts Bootstrapping the Red Hat Ceph Storage installation creates a working storage cluster, consisting of one Monitor daemon and one Manager daemon within the same container. As a storage administrator, you can add additional hosts to the storage cluster and configure them. Note Running the preflight playbook installs podman , lvm2 , chrony , and cephadm on all hosts listed in the Ansible inventory file. When using a custom registry, be sure to log in to the custom registry on newly added nodes before adding any Ceph daemons. Prerequisites A running Red Hat Ceph Storage cluster. Root-level or user with sudo access to all nodes in the storage cluster. Register the nodes to the CDN and attach subscriptions. Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster. Procedure + Note In the following procedure, use either root , as indicated, or the username with which the user is bootstrapped. From the node that contains the admin keyring, install the storage cluster's public SSH key in the root user's authorized_keys file on the new host: Syntax Example Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node. Example From the Ansible administration node, add the new host to the Ansible inventory file. 
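As a sketch of the first step above, installing the storage cluster's public SSH key on the new host, assuming host04 is the host being added:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@host04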
The default location for the file is /usr/share/cephadm-ansible/hosts . The following example shows the structure of a typical inventory file: Example Note If you have previously added the new host to the Ansible inventory file and run the preflight playbook on the host, skip to step 4. Run the preflight playbook with the --limit option: Syntax Example The preflight playbook installs podman , lvm2 , chrony , and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory. From the bootstrap node, use the cephadm orchestrator to add the new host to the storage cluster: Syntax Example Optional: You can also add nodes by IP address, before and after you run the preflight playbook. If you do not have DNS configured in your storage cluster environment, you can add the hosts by IP address, along with the host names. Syntax Example Verification View the status of the storage cluster and verify that the new host has been added. The STATUS of the hosts is blank, in the output of the ceph orch host ls command. Example Additional Resources See the Registering Red Hat Ceph Storage nodes to the CDN and attaching subscriptions section in the Red Hat Ceph Storage Installation Guide . See the Creating an Ansible user with sudo access section in the Red Hat Ceph Storage Installation Guide . 3.14.1. Using the addr option to identify hosts The addr option offers an additional way to contact a host. Add the IP address of the host to the addr option. If ssh cannot connect to the host by its hostname, then it uses the value stored in addr to reach the host by its IP address. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Procedure Run this procedure from inside the cephadm shell. Add the IP address: Syntax Example Note If adding a host by hostname results in that host being added with an IPv6 address instead of an IPv4 address, use ceph orch host to specify the IP address of that host: To convert the IP address from IPv6 format to IPv4 format for a host you have added, use the following command: 3.14.2. Adding multiple hosts Use a YAML file to add multiple hosts to the storage cluster at the same time. Note Be sure to create the hosts.yaml file within a host container, or create the file on the local host and then use the cephadm shell to mount the file within the container. The cephadm shell automatically places mounted files in /mnt . If you create the file directly on the local host and then apply the hosts.yaml file instead of mounting it, you might see a File does not exist error. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Procedure Copy over the public ssh key to each of the hosts that you want to add. Use a text editor to create a hosts.yaml file. Add the host descriptions to the hosts.yaml file, as shown in the following example. Include the labels to identify placements for the daemons that you want to deploy on each host. Separate each host description with three dashes (---). Example If you created the hosts.yaml file within the host container, invoke the ceph orch apply command: Example If you created the hosts.yaml file directly on the local host, use the cephadm shell to mount the file: Example View the list of hosts and their labels: Example Note If a host is online and operating normally, its status is blank. 
An offline host shows a status of OFFLINE, and a host in maintenance mode shows a status of MAINTENANCE. 3.14.3. Adding hosts in disconnected deployments If you are running a storage cluster on a private network and your host domain name server (DNS) cannot be reached through private IP, you must include both the host name and the IP address for each host you want to add to the storage cluster. Prerequisites A running storage cluster. Root-level access to all hosts in the storage cluster. Procedure Invoke the cephadm shell. Syntax Add the host: Syntax Example 3.14.4. Removing hosts You can remove hosts of a Ceph cluster with the Ceph Orchestrators. All the daemons are removed with the drain option which adds the _no_schedule label to ensure that you cannot deploy any daemons or a cluster till the operation is complete. Important If you are removing the bootstrap host, be sure to copy the admin keyring and the configuration file to another host in the storage cluster before you remove the host. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the storage cluster. All the services are deployed. Cephadm is deployed on the nodes where the services have to be removed. Procedure Log into the Cephadm shell: Example Fetch the host details: Example Drain all the daemons from the host: Syntax Example The _no_schedule label is automatically applied to the host which blocks deployment. Check the status of OSD removal: Example When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster. Check if all the daemons are removed from the storage cluster: Syntax Example Remove the host: Syntax Example Additional Resources See the Adding hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. 3.15. Labeling hosts The Ceph orchestrator supports assigning labels to hosts. Labels are free-form and have no specific meanings. This means that you can use mon , monitor , mycluster_monitor , or any other text string. Each host can have multiple labels. For example, apply the mon label to all hosts on which you want to deploy Ceph Monitor daemons, mgr for all hosts on which you want to deploy Ceph Manager daemons, rgw for Ceph Object Gateway daemons, and so on. Labeling all the hosts in the storage cluster helps to simplify system management tasks by allowing you to quickly identify the daemons running on each host. In addition, you can use the Ceph orchestrator or a YAML file to deploy or remove daemons on hosts that have specific host labels. 3.15.1. Adding a label to a host Use the Ceph Orchestrator to add a label to a host. Labels can be used to specify placement of daemons. A few examples of labels are mgr , mon , and osd based on the service deployed on the hosts. Each host can have multiple labels. You can also add the following host labels that have special meaning to cephadm and they begin with _ : _no_schedule : This label prevents cephadm from scheduling or deploying daemons on the host. If it is added to an existing host that already contains Ceph daemons, it causes cephadm to move those daemons elsewhere, except OSDs which are not removed automatically. When a host is added with the _no_schedule label, no daemons are deployed on it. 
When the daemons are drained before the host is removed, the _no_schedule label is set on that host. _no_autotune_memory : This label does not autotune memory on the host. It prevents the daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled for one or more daemons on that host. _admin : By default, the _admin label is applied to the bootstrapped host in the storage cluster and the client.admin key is set to be distributed to that host with the ceph orch client-keyring {ls|set|rm} function. Adding this label to additional hosts normally causes cephadm to deploy configuration and keyring files in the /etc/ceph directory. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Hosts are added to the storage cluster. Procedure Log in to the Cephadm shell: Example Add a label to a host: Syntax Example Verification List the hosts: Example 3.15.2. Removing a label from a host You can use the Ceph orchestrator to remove a label from a host. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Procedure Launch the cephadm shell: Remove the label. Syntax Example Verification List the hosts: Example 3.15.3. Using host labels to deploy daemons on specific hosts You can use host labels to deploy daemons to specific hosts. There are two ways to use host labels to deploy daemons on specific hosts: By using the --placement option from the command line. By using a YAML file. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Procedure Log into the Cephadm shell: Example List current hosts and labels: Example Method 1 : Use the --placement option to deploy a daemon from the command line: Syntax Example Method 2 To assign the daemon to a specific host label in a YAML file, specify the service type and label in the YAML file: Create the placement.yml file: Example Specify the service type and label in the placement.yml file: Example Apply the daemon placement file: Syntax Example Verification List the status of the daemons: Syntax Example 3.16. Adding Monitor service A typical Red Hat Ceph Storage storage cluster has three or five monitor daemons deployed on different hosts. If your storage cluster has five or more hosts, Red Hat recommends that you deploy five Monitor nodes. Note In the case of a firewall, see the Firewall settings for Ceph Monitor node section of the Red Hat Ceph Storage Configuration Guide for details. Note The bootstrap node is the initial monitor of the storage cluster. Be sure to include the bootstrap node in the list of hosts to which you want to deploy. Note If you want to apply Monitor service to more than one specific host, be sure to specify all of the host names within the same ceph orch apply command. If you specify ceph orch apply mon --placement host1 and then specify ceph orch apply mon --placement host2 , the second command removes the Monitor service on host1 and applies a Monitor service to host2. If your Monitor nodes or your entire cluster are located on a single subnet, then cephadm automatically adds up to five Monitor daemons as you add new hosts to the cluster. cephadm automatically configures the Monitor daemons on the new hosts. The new hosts reside on the same subnet as the first (bootstrap) host in the storage cluster. 
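Label-based placement, described in the previous section, is one way to control where Monitor daemons run. A brief sketch, with placeholder host and label names:

# add the mon label to a host
ceph orch host label add host02 mon

# deploy Monitor daemons on every host that carries the mon label
ceph orch apply mon --placement="label:mon"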
cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster. Prerequisites Root-level access to all hosts in the storage cluster. A running storage cluster. Procedure Apply the five Monitor daemons to five random hosts in the storage cluster: Disable automatic Monitor deployment: 3.16.1. Adding Monitor nodes to specific hosts Use host labels to identify the hosts that contain Monitor nodes. Prerequisites Root-level access to all nodes in the storage cluster. A running storage cluster. Procedure Assign the mon label to the host: Syntax Example View the current hosts and labels: Syntax Example Deploy monitors based on the host label: Syntax Deploy monitors on a specific set of hosts: Syntax Example Note Be sure to include the bootstrap node in the list of hosts to which you want to deploy. 3.17. Setting up a custom SSH key on an existing cluster As a storage administrator, with Cephadm, you can use an SSH key to securely authenticate with remote hosts. The SSH key is stored in the monitor to connect to remote hosts. When the cluster is bootstrapped, this SSH key is generated automatically and no additional configuration is necessary. However, you can generate a new SSH key with the ceph cephadm generate-key command. Prerequisites An Ansible administration node. Root-level access to the Ansible administration node. The cephadm-ansible package is installed on the node. Procedure Navigate to the cephadm-ansible directory. Generate a new SSH key: Example Retrieve the public portion of the SSH key: Example Delete the currently stored SSH key: Example Restart the mgr daemon to reload the configuration: Example 3.17.1. Configuring a different SSH user As a storage administrator, you can configure a non-root SSH user who can log into all the Ceph cluster nodes with enough privileges to download container images, start containers, and execute commands without prompting for a password. Important Prior to configuring a non-root SSH user, the cluster SSH key needs to be added to the user's authorized_keys file and non-root users must have passwordless sudo access. Prerequisites A running Red Hat Ceph Storage cluster. An Ansible administration node. Root-level access to the Ansible administration node. The cephadm-ansible package is installed on the node. Add the cluster SSH keys to the user's authorized_keys . Enable passwordless sudo access for the non-root users. Procedure Navigate to the cephadm-ansible directory. Provide Cephadm the name of the user who is going to perform all the Cephadm operations: Syntax Example Retrieve the SSH public key. Syntax Example Copy the SSH keys to all the hosts. Syntax Example 3.18. Setting up the admin node Use an admin node to administer the storage cluster. An admin node contains both the cluster configuration file and the admin keyring. Both of these files are stored in the directory /etc/ceph and use the name of the storage cluster as a prefix. For example, the default ceph cluster name is ceph . In a cluster using the default name, the admin keyring is named /etc/ceph/ceph.client.admin.keyring . The corresponding cluster configuration file is named /etc/ceph/ceph.conf . To set up additional hosts in the storage cluster as admin nodes, apply the _admin label to the host you want to designate as an administrator node. Note By default, after applying the _admin label to a node, cephadm copies the ceph.conf and client.admin keyring files to that node. 
The _admin label is automatically applied to the bootstrap node unless the --skip-admin-label option was specified with the cephadm bootstrap command. Prerequisites A running storage cluster with cephadm installed. The storage cluster has running Monitor and Manager nodes. Root-level access to all nodes in the cluster. Procedure Use ceph orch host ls to view the hosts in your storage cluster: Example Use the _admin label to designate the admin host in your storage cluster. For best results, this host should have both Monitor and Manager daemons running. Syntax Example Verify that the admin host has the _admin label. Example Log in to the admin node to manage the storage cluster. 3.18.1. Deploying Ceph monitor nodes using host labels A typical Red Hat Ceph Storage storage cluster has three or five Ceph Monitor daemons deployed on different hosts. If your storage cluster has five or more hosts, Red Hat recommends that you deploy five Ceph Monitor nodes. If your Ceph Monitor nodes or your entire cluster are located on a single subnet, then cephadm automatically adds up to five Ceph Monitor daemons as you add new nodes to the cluster. cephadm automatically configures the Ceph Monitor daemons on the new nodes. The new nodes reside on the same subnet as the first (bootstrap) node in the storage cluster. cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster. Note Use host labels to identify the hosts that contain Ceph Monitor nodes. Prerequisites Root-level access to all nodes in the storage cluster. A running storage cluster. Procedure Assign the mon label to the host: Syntax Example View the current hosts and labels: Syntax Example Deploy Ceph Monitor daemons based on the host label: Syntax Deploy Ceph Monitor daemons on a specific set of hosts: Syntax Example Note Be sure to include the bootstrap node in the list of hosts to which you want to deploy. 3.18.2. Adding Ceph Monitor nodes by IP address or network name A typical Red Hat Ceph Storage storage cluster has three or five monitor daemons deployed on different hosts. If your storage cluster has five or more hosts, Red Hat recommends that you deploy five Monitor nodes. If your Monitor nodes or your entire cluster are located on a single subnet, then cephadm automatically adds up to five Monitor daemons as you add new nodes to the cluster. You do not need to configure the Monitor daemons on the new nodes. The new nodes reside on the same subnet as the first node in the storage cluster. The first node in the storage cluster is the bootstrap node. cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster. Prerequisites Root-level access to all nodes in the storage cluster. A running storage cluster. Procedure To deploy each additional Ceph Monitor node: Syntax Example 3.19. Adding Manager service cephadm automatically installs a Manager daemon on the bootstrap node during the bootstrapping process. Use the Ceph orchestrator to deploy additional Manager daemons. The Ceph orchestrator deploys two Manager daemons by default. To deploy a different number of Manager daemons, specify a different number. If you do not specify the hosts where the Manager daemons should be deployed, the Ceph orchestrator randomly selects the hosts and deploys the Manager daemons to them. Note If you want to apply Manager daemons to more than one specific host, be sure to specify all of the host names within the same ceph orch apply command. 
If you specify ceph orch apply mgr --placement host1 and then specify ceph orch apply mgr --placement host2 , the second command removes the Manager daemon on host1 and applies a Manager daemon to host2. Red Hat recommends that you use the --placement option to deploy to specific hosts. Prerequisites A running storage cluster. Procedure To specify that you want to apply a certain number of Manager daemons to randomly selected hosts: Syntax Example To add Manager daemons to specific hosts in your storage cluster: Syntax Example 3.20. Adding OSDs Cephadm will not provision an OSD on a device that is not available. A storage device is considered available if it meets all of the following conditions: The device must have no partitions. The device must not be mounted. The device must not contain a file system. The device must not contain a Ceph BlueStore OSD. The device must be larger than 5 GB. Prerequisites A running Red Hat Ceph Storage cluster. Procedure List the available devices to deploy OSDs: Syntax Example You can either deploy the OSDs on specific hosts or on all the available devices: To create an OSD from a specific device on a specific host: Syntax Example To deploy OSDs on any available and unused devices, use the --all-available-devices option. Example Note This command creates colocated WAL and DB daemons. If you want to create non-colocated daemons, do not use this command. Additional Resources For more information about drive specifications for OSDs, see the Advanced service specifications and filters for deploying OSDs section in the Red Hat Ceph Storage Operations Guide . For more information about zapping devices to clear data on devices, see the Zapping devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide . 3.21. Running the cephadm-clients playbook The cephadm-clients.yml playbook handles the distribution of configuration and admin keyring files to a group of Ceph clients. Note If you do not specify a configuration file when you run the playbook, the playbook will generate and distribute a minimal configuration file. By default, the generated file is located at /etc/ceph/ceph.conf . Note If you are not using the cephadm-ansible playbooks, after upgrading your Ceph cluster, you must upgrade the ceph-common package and client libraries on your client nodes. For more information, see Upgrading the Red Hat Ceph Storage cluster section in the Red Hat Ceph Storage Upgrade Guide . Prerequisites Root-level access to the Ansible administration node. Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster. The cephadm-ansible package is installed. The preflight playbook has been run on the initial host in the storage cluster. For more information, see Running the preflight playbook . The client_group variable must be specified in the Ansible inventory file. The [admin] group is defined in the inventory file with a node where the admin keyring is present at /etc/ceph/ceph.client.admin.keyring . Procedure Navigate to the /usr/share/cephadm-ansible directory. Run the cephadm-clients.yml playbook on the initial host in the group of clients. Use the full path name to the admin keyring on the admin host for PATH_TO_KEYRING . Optional: If you want to specify an existing configuration file to use, specify the full path to the configuration file for CONFIG-FILE . Use the Ansible group name for the group of clients for ANSIBLE_GROUP_NAME . 
Use the FSID of the cluster where the admin keyring and configuration files are stored for FSID . The default path for the FSID is /var/lib/ceph/ . Syntax Example After installation is complete, the specified clients in the group have the admin keyring. If you did not specify a configuration file, cephadm-ansible creates a minimal default configuration file on each client. Additional Resources For more information about admin keys, see the Ceph User Management section in the Red Hat Ceph Storage Administration Guide . 3.22. Purging the Ceph storage cluster Purging the Ceph storage cluster clears any data or connections that remain from deployments on your server. Use the cephadm rm-cluster command since Ansible is not supported. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Disable cephadm to stop all the orchestration operations to avoid deploying new daemons: Example Get the FSID of the cluster: Example Exit the cephadm shell. Example Purge the Ceph daemons from all hosts in the cluster: Syntax Example 3.23. Deploying client nodes As a storage administrator, you can deploy client nodes by running the cephadm-preflight.yml and cephadm-clients.yml playbooks. The cephadm-preflight.yml playbook configures the Ceph repository and prepares the storage cluster for bootstrapping. It also installs some prerequisites, such as podman , lvm2 , chrony , and cephadm . The cephadm-clients.yml playbook handles the distribution of configuration and keyring files to a group of Ceph clients. Note if you are not using the cephadm-ansible playbooks, after upgrading your Ceph cluster, you must upgrade the ceph-common package and client libraries on your client nodes. For more information, see Upgrading the Red Hat Ceph Storage cluster . Prerequisites Root-level access to the Ansible administration node. Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster. Installation of the cephadm-ansible package. The [clients] group variable must be specified in the Ansible inventory file. The [admin] group is defined in the inventory file with a node where the admin keyring is present at /etc/ceph/ceph.client.admin.keyring . Procedure As an Ansible user, navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Open and edit the hosts inventory file and add the [clients] group and clients to your inventory: Example Run the cephadm-preflight.yml playbook to install the prerequisites on the clients: Syntax Example Run the cephadm-clients.yml playbook to distribute the keyring and Ceph configuration files to a set of clients. To copy the keyring with a custom destination keyring name: Syntax Replace INVENTORY_FILE with the Ansible inventory file name. Replace FSID with the FSID of the cluster. Replace KEYRING_PATH with the full path name to the keyring on the admin host that you want to copy to the client. Optional: Replace CLIENT_GROUP_NAME with the Ansible group name for the clients to set up. Optional: Replace CEPH_CONFIGURATION_PATH with the full path to the Ceph configuration file on the admin node. Optional: Replace KEYRING_DESTINATION_PATH with the full path name of the destination where the keyring will be copied. Note If you do not specify a configuration file with the conf option when you run the playbook, the playbook generates and distributes a minimal configuration file. By default, the generated file is located at /etc/ceph/ceph.conf . 
Example To copy a keyring with the default destination keyring name of ceph.keyring and using the default group of clients: Syntax Verification Log into the client nodes and verify that the keyring and configuration files exist. Example | [
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches ' Red Hat Ceph Storage '",
"subscription-manager attach --pool= POOL_ID",
"subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms",
"dnf update",
"subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms",
"dnf install cephadm-ansible",
"cd /usr/share/cephadm-ansible",
"mkdir -p inventory/staging inventory/production",
"[defaults] inventory = ./inventory/staging",
"touch inventory/staging/hosts touch inventory/production/hosts",
"NODE_NAME_1 NODE_NAME_2 [admin] ADMIN_NODE_NAME_1",
"host02 host03 host04 [admin] host01",
"ansible-playbook -i inventory/staging/hosts PLAYBOOK.yml",
"ansible-playbook -i inventory/production/hosts PLAYBOOK.yml",
"ssh root@myhostname root@myhostname password: Permission denied, please try again.",
"echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config.d/01-permitrootlogin.conf",
"systemctl restart sshd.service",
"ssh root@ HOST_NAME",
"ssh root@host01",
"ssh root@ HOST_NAME",
"ssh root@host01",
"adduser USER_NAME",
"adduser ceph-admin",
"passwd USER_NAME",
"passwd ceph-admin",
"cat << EOF >/etc/sudoers.d/ USER_NAME USDUSER_NAME ALL = (root) NOPASSWD:ALL EOF",
"cat << EOF >/etc/sudoers.d/ceph-admin ceph-admin ALL = (root) NOPASSWD:ALL EOF",
"chmod 0440 /etc/sudoers.d/ USER_NAME",
"chmod 0440 /etc/sudoers.d/ceph-admin",
"[ceph-admin@admin ~]USD ssh-keygen",
"ssh-copy-id USER_NAME @ HOST_NAME",
"[ceph-admin@admin ~]USD ssh-copy-id ceph-admin@host01",
"[ceph-admin@admin ~]USD touch ~/.ssh/config",
"Host host01 Hostname HOST_NAME User USER_NAME Host host02 Hostname HOST_NAME User USER_NAME",
"Host host01 Hostname host01 User ceph-admin Host host02 Hostname host02 User ceph-admin Host host03 Hostname host03 User ceph-admin",
"[ceph-admin@admin ~]USD chmod 600 ~/.ssh/config",
"host02 host03 host04 [admin] host01",
"host02 host03 host04 [admin] host01",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit GROUP_NAME | NODE_NAME",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit clients [ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host01",
"cephadm bootstrap --cluster-network NETWORK_CIDR --mon-ip IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD --yes-i-know",
"cephadm bootstrap --cluster-network 10.10.128.0/24 --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1 --yes-i-know",
"Ceph Dashboard is now available at: URL: https://host01:8443/ User: admin Password: i8nhu7zham Enabling client.admin keyring and conf on hosts with \"admin\" label You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete.",
"cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --allow-fqdn-hostname --registry-json REGISTRY_JSON",
"cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --allow-fqdn-hostname --registry-json /etc/mylogin.json",
"{ \"url\":\" REGISTRY_URL \", \"username\":\" USER_NAME \", \"password\":\" PASSWORD \" }",
"{ \"url\":\"registry.redhat.io\", \"username\":\"myuser1\", \"password\":\"mypassword1\" }",
"cephadm bootstrap --mon-ip IP_ADDRESS --registry-json /etc/mylogin.json",
"cephadm bootstrap --mon-ip 10.10.128.68 --registry-json /etc/mylogin.json",
"service_type: host addr: host01 hostname: host01 --- service_type: host addr: host02 hostname: host02 --- service_type: host addr: host03 hostname: host03 --- service_type: host addr: host04 hostname: host04 --- service_type: mon placement: host_pattern: \"host[0-2]\" --- service_type: osd service_id: my_osds placement: host_pattern: \"host[1-3]\" data_devices: all: true",
"cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD",
"cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"su - SSH_USER_NAME",
"su - ceph Last login: Tue Sep 14 12:00:29 EST 2021 on pts/0",
"[ceph@host01 ~]USD ssh host01 Last login: Tue Sep 14 12:03:29 EST 2021 on pts/0",
"sudo cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD",
"sudo cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --all --matches=\"*Ceph*\"",
"subscription-manager attach --pool= POOL_ID",
"subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms",
"dnf install -y podman httpd-tools",
"mkdir -p /opt/registry/{auth,certs,data}",
"htpasswd -bBc /opt/registry/auth/htpasswd PRIVATE_REGISTRY_USERNAME PRIVATE_REGISTRY_PASSWORD",
"htpasswd -bBc /opt/registry/auth/htpasswd myregistryusername myregistrypassword1",
"openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS: LOCAL_NODE_FQDN \"",
"openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS:admin.lab.redhat.com\"",
"ln -s /opt/registry/certs/domain.crt /opt/registry/certs/domain.cert",
"cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust trust list | grep -i \" LOCAL_NODE_FQDN \"",
"cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust trust list | grep -i \"admin.lab.redhat.com\" label: admin.lab.redhat.com",
"scp /opt/registry/certs/domain.crt root@host01:/etc/pki/ca-trust/source/anchors/ ssh root@host01 update-ca-trust trust list | grep -i \"admin.lab.redhat.com\" label: admin.lab.redhat.com",
"run --restart=always --name NAME_OF_CONTAINER -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm\" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -e \"REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt\" -e \"REGISTRY_HTTP_TLS_KEY=/certs/domain.key\" -e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true -d registry:2",
"podman run --restart=always --name myprivateregistry -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm\" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -e \"REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt\" -e \"REGISTRY_HTTP_TLS_KEY=/certs/domain.key\" -e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true -d registry:2",
"unqualified-search-registries = [\"registry.redhat.io\", \"registry.access.redhat.com\", \"registry.fedoraproject.org\", \"registry.centos.org\", \"docker.io\"]",
"login registry.redhat.io",
"run -v / CERTIFICATE_DIRECTORY_PATH :/certs:Z -v / CERTIFICATE_DIRECTORY_PATH /domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds RED_HAT_CUSTOMER_PORTAL_LOGIN : RED_HAT_CUSTOMER_PORTAL_PASSWORD --dest-cert-dir=./certs/ --dest-creds PRIVATE_REGISTRY_USERNAME : PRIVATE_REGISTRY_PASSWORD docker://registry.redhat.io/ SRC_IMAGE : SRC_TAG docker:// LOCAL_NODE_FQDN :5000/ DST_IMAGE : DST_TAG",
"podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/rhceph/rhceph-8-rhel9:latest docker://admin.lab.redhat.com:5000/rhceph/rhceph-8-rhel9:latest podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.12 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus-node-exporter:v4.12 podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/rhceph/grafana-rhel9:latest docker://admin.lab.redhat.com:5000/rhceph/grafana-rhel9:latest podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus:v4.12 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus:v4.12 podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.12 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus-alertmanager:v4.12",
"curl -u PRIVATE_REGISTRY_USERNAME : PRIVATE_REGISTRY_PASSWORD https:// LOCAL_NODE_FQDN :5000/v2/_catalog",
"curl -u myregistryusername:myregistrypassword1 https://admin.lab.redhat.com:5000/v2/_catalog {\"repositories\":[\"openshift4/ose-prometheus\",\"openshift4/ose-prometheus-alertmanager\",\"openshift4/ose-prometheus-node-exporter\",\"rhceph/rhceph-8-dashboard-rhel9\",\"rhceph/rhceph-8-rhel9\"]}",
"host02 host03 host04 [admin] host01",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url= CUSTOM_REPO_URL \"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\"",
"ansible-playbook -vvv -i INVENTORY_HOST_FILE_ cephadm-set-container-insecure-registries.yml -e insecure_registry= REGISTRY_URL",
"ansible-playbook -vvv -i hosts cephadm-set-container-insecure-registries.yml -e insecure_registry=host01:5050",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url= CUSTOM_REPO_URL \" --limit GROUP_NAME | NODE_NAME",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\" --limit clients [ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\" --limit host02",
"cephadm --image PRIVATE_REGISTRY_NODE_FQDN :5000/ CUSTOM_IMAGE_NAME : IMAGE_TAG bootstrap --mon-ip IP_ADDRESS --registry-url PRIVATE_REGISTRY_NODE_FQDN :5000 --registry-username PRIVATE_REGISTRY_USERNAME --registry-password PRIVATE_REGISTRY_PASSWORD",
"cephadm --image admin.lab.redhat.com:5000/rhceph-8-rhel9:latest bootstrap --mon-ip 10.10.128.68 --registry-url admin.lab.redhat.com:5000 --registry-username myregistryusername --registry-password myregistrypassword1",
"Ceph Dashboard is now available at: URL: https://host01:8443/ User: admin Password: i8nhu7zham Enabling client.admin keyring and conf on hosts with \"admin\" label You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete.",
"ceph cephadm registry-login --registry-url CUSTOM_REGISTRY_NAME --registry_username REGISTRY_USERNAME --registry_password REGISTRY_PASSWORD",
"ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1",
"ceph config set mgr mgr/cephadm/ OPTION_NAME CUSTOM_REGISTRY_NAME / CONTAINER_NAME",
"container_image_prometheus container_image_grafana container_image_alertmanager container_image_node_exporter",
"ceph config set mgr mgr/cephadm/container_image_prometheus myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_grafana myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_alertmanager myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_node_exporter myregistry/mycontainer",
"ceph orch redeploy node-exporter",
"ceph config rm mgr mgr/cephadm/ OPTION_NAME",
"ceph config rm mgr mgr/cephadm/container_image_prometheus",
"[ansible@admin ~]USD cd /usr/share/cephadm-ansible",
"ansible-playbook -i INVENTORY_HOST_FILE cephadm-distribute-ssh-key.yml -e cephadm_ssh_user= USER_NAME -e cephadm_pubkey_path= home/cephadm/ceph.key -e admin_node= ADMIN_NODE_NAME_1",
"[ansible@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=host01 [ansible@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e admin_node=host01",
"cephadm shell ceph -s",
"cephadm shell ceph -s",
"exit",
"podman ps",
"cephadm shell ceph -s cluster: id: f64f341c-655d-11eb-8778-fa163e914bcc health: HEALTH_OK services: mon: 3 daemons, quorum host01,host02,host03 (age 94m) mgr: host01.lbnhug(active, since 59m), standbys: host02.rofgay, host03.ohipra mds: 1/1 daemons up, 1 standby osd: 18 osds: 18 up (since 10m), 18 in (since 10m) rgw: 4 daemons active (2 hosts, 1 zones) data: volumes: 1/1 healthy pools: 8 pools, 225 pgs objects: 230 objects, 9.9 KiB usage: 271 MiB used, 269 GiB / 270 GiB avail pgs: 225 active+clean io: client: 85 B/s rd, 0 op/s rd, 0 op/s wr",
".Syntax [source,subs=\"verbatim,quotes\"] ---- ceph cephadm registry-login --registry-url _CUSTOM_REGISTRY_NAME_ --registry_username _REGISTRY_USERNAME_ --registry_password _REGISTRY_PASSWORD_ ----",
".Example ---- ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1 ----",
"ssh-copy-id -f -i /etc/ceph/ceph.pub user@ NEWHOST",
"ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"[ceph-admin@admin ~]USD cat hosts host02 host03 host04 [admin] host01",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02",
"ceph orch host add NEWHOST",
"ceph orch host add host02 Added host 'host02' with addr '10.10.128.69' ceph orch host add host03 Added host 'host03' with addr '10.10.128.70'",
"ceph orch host add HOSTNAME IP_ADDRESS",
"ceph orch host add host02 10.10.128.69 Added host 'host02' with addr '10.10.128.69'",
"ceph orch host ls",
"ceph orch host add HOSTNAME IP_ADDR",
"ceph orch host add host01 10.10.128.68",
"ceph orch host set-addr HOSTNAME IP_ADDR",
"ceph orch host set-addr HOSTNAME IPV4_ADDRESS",
"service_type: host addr: hostname: host02 labels: - mon - osd - mgr --- service_type: host addr: hostname: host03 labels: - mon - osd - mgr --- service_type: host addr: hostname: host04 labels: - mon - osd",
"ceph orch apply -i hosts.yaml Added host 'host02' with addr '10.10.128.69' Added host 'host03' with addr '10.10.128.70' Added host 'host04' with addr '10.10.128.71'",
"cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml",
"ceph orch host ls HOST ADDR LABELS STATUS host02 host02 mon osd mgr host03 host03 mon osd mgr host04 host04 mon osd",
"cephadm shell",
"ceph orch host add HOST_NAME HOST_ADDRESS",
"ceph orch host add host03 10.10.128.70",
"cephadm shell",
"ceph orch host ls",
"ceph orch host drain HOSTNAME",
"ceph orch host drain host02",
"ceph orch osd rm status",
"ceph orch ps HOSTNAME",
"ceph orch ps host02",
"ceph orch host rm HOSTNAME",
"ceph orch host rm host02",
"cephadm shell",
"ceph orch host label add HOSTNAME LABEL",
"ceph orch host label add host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host label rm HOSTNAME LABEL",
"ceph orch host label rm host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host ls HOST ADDR LABELS STATUS host01 _admin mon osd mgr host02 mon osd mgr mylabel",
"ceph orch apply DAEMON --placement=\"label: LABEL \"",
"ceph orch apply prometheus --placement=\"label:mylabel\"",
"vi placement.yml",
"service_type: prometheus placement: label: \"mylabel\"",
"ceph orch apply -i FILENAME",
"ceph orch apply -i placement.yml Scheduled prometheus update...",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=prometheus NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID prometheus.host02 host02 *:9095 running (2h) 8m ago 2h 85.3M - 2.22.2 ac25aac5d567 ad8c7593d7c0",
"ceph orch apply mon 5",
"ceph orch apply mon --unmanaged",
"ceph orch host label add HOSTNAME mon",
"ceph orch host label add host01 mon",
"ceph orch host ls",
"ceph orch host label add host02 mon ceph orch host label add host03 mon ceph orch host ls HOST ADDR LABELS STATUS host01 mon host02 mon host03 mon host04 host05 host06",
"ceph orch apply mon label:mon",
"ceph orch apply mon HOSTNAME1 , HOSTNAME2 , HOSTNAME3",
"ceph orch apply mon host01,host02,host03",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm generate-key",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm get-pub-key",
"[ceph-admin@admin cephadm-ansible]USDceph cephadm clear-key",
"[ceph-admin@admin cephadm-ansible]USD ceph mgr fail",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm set-user <user>",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm set-user user",
"ceph cephadm get-pub-key > ~/ceph.pub",
"[ceph-admin@admin cephadm-ansible]USD ceph cephadm get-pub-key > ~/ceph.pub",
"ssh-copy-id -f -i ~/ceph.pub USER @ HOST",
"[ceph-admin@admin cephadm-ansible]USD ssh-copy-id ceph-admin@host01",
"ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr host04 host05 host06",
"ceph orch host label add HOSTNAME _admin",
"ceph orch host label add host03 _admin",
"ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr,_admin host04 host05 host06",
"ceph orch host label add HOSTNAME mon",
"ceph orch host label add host02 mon ceph orch host label add host03 mon",
"ceph orch host ls",
"ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon host04 host05 host06",
"ceph orch apply mon label:mon",
"ceph orch apply mon HOSTNAME1 , HOSTNAME2 , HOSTNAME3",
"ceph orch apply mon host01,host02,host03",
"ceph orch apply mon NODE:IP_ADDRESS_OR_NETWORK_NAME [ NODE:IP_ADDRESS_OR_NETWORK_NAME ...]",
"ceph orch apply mon host02:10.10.128.69 host03:mynetwork",
"ceph orch apply mgr NUMBER_OF_DAEMONS",
"ceph orch apply mgr 3",
"ceph orch apply mgr --placement \" HOSTNAME1 HOSTNAME2 HOSTNAME3 \"",
"ceph orch apply mgr --placement \"host02 host03 host04\"",
"ceph orch device ls [--hostname= HOSTNAME1 HOSTNAME2 ] [--wide] [--refresh]",
"ceph orch device ls --wide --refresh",
"ceph orch daemon add osd HOSTNAME : DEVICE_PATH",
"ceph orch daemon add osd host02:/dev/sdb",
"ceph orch apply osd --all-available-devices",
"ansible-playbook -i hosts cephadm-clients.yml -extra-vars '{\"fsid\":\" FSID \", \"client_group\":\" ANSIBLE_GROUP_NAME \", \"keyring\":\" PATH_TO_KEYRING \", \"conf\":\" CONFIG_FILE \"}'",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{\"fsid\":\"be3ca2b2-27db-11ec-892b-005056833d58\",\"client_group\":\"fs_clients\",\"keyring\":\"/etc/ceph/fs.keyring\", \"conf\": \"/etc/ceph/ceph.conf\"}'",
"ceph mgr module disable cephadm",
"ceph fsid",
"exit",
"cephadm rm-cluster --force --zap-osds --fsid FSID",
"cephadm rm-cluster --force --zap-osds --fsid a6ca415a-cde7-11eb-a41a-002590fc2544",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"host02 host03 host04 [admin] host01 [clients] client01 client02 client03",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --limit CLIENT_GROUP_NAME | CLIENT_NODE_NAME",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --limit clients",
"ansible-playbook -i INVENTORY_FILE cephadm-clients.yml --extra-vars '{\"fsid\":\" FSID \",\"keyring\":\" KEYRING_PATH \",\"client_group\":\" CLIENT_GROUP_NAME \",\"conf\":\" CEPH_CONFIGURATION_PATH \",\"keyring_dest\":\" KEYRING_DESTINATION_PATH \"}'",
"[ceph-admin@host01 cephadm-ansible]USD ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{\"fsid\":\"266ee7a8-2a05-11eb-b846-5254002d4916\",\"keyring\":\"/etc/ceph/ceph.client.admin.keyring\",\"client_group\":\"clients\",\"conf\":\"/etc/ceph/ceph.conf\",\"keyring_dest\":\"/etc/ceph/custom.name.ceph.keyring\"}'",
"ansible-playbook -i INVENTORY_FILE cephadm-clients.yml --extra-vars '{\"fsid\":\" FSID \",\"keyring\":\" KEYRING_PATH \",\"conf\":\" CONF_PATH \"}'",
"ls -l /etc/ceph/ -rw-------. 1 ceph ceph 151 Jul 11 12:23 custom.name.ceph.keyring -rw-------. 1 ceph ceph 151 Jul 11 12:23 ceph.keyring -rw-------. 1 ceph ceph 269 Jul 11 12:23 ceph.conf"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/installation_guide/red-hat-ceph-storage-installation |
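A minimal end-to-end sketch of the label-based deployment workflow described above, assuming a cephadm shell on a host that already carries the _admin label; the host names host02 and host03 are illustrative and should match the output of ceph orch host ls: ceph orch host label add host03 _admin ceph orch host label add host02 mon ceph orch host label add host03 mon ceph orch apply mon label:mon ceph orch host ls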
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.4 Documentation Data Grid 8.4 Component Details Supported Configurations for Data Grid 8.4 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/using_the_resp_protocol_endpoint_with_data_grid/rhdg-docs_datagrid |
5.2. The Virtual Machine Manager Interface | 5.2. The Virtual Machine Manager Interface The following sections provide information about the Virtual Machine Manager user interface. The user interface includes The Virtual Machine Manager main window and The Virtual Machine window . 5.2.1. The Virtual Machine Manager Main Window This following figure shows the Virtual Machine Manager main window interface. Figure 5.2. The Virtual Machine Manager window The Virtual Machine Manager main window title bar displays Virtual Machine Manager . 5.2.1.1. The main window menu bar The following table lists the entries in the Virtual Machine Manager main window menus. Table 5.1. Virtual Machine Manager main window menus Menu name Menu item Description File Add Connection Opens the Add Connection dialog to connect to a local or remote hypervisor. For more information, see Adding a Remote Connection in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. New Virtual Machine Opens the New VM wizard to create a new guest virtual machine. For more information, see Creating Guests with virt-manager in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. Close Closes the Virtual Machine Manager window without closing any Virtual Machine windows. Running virtual machines are not stopped. Exit Closes the Virtual Machine Manager and all Virtual Machine windows. Running virtual machines are not stopped. Edit Connection Details Opens the Connection Details window for the selected connection. Virtual Machine Details Opens the Virtual Machine window for the selected virtual machine. For more information, see The Virtual Machine pane . Delete Deletes the selected connection or virtual machine. Preferences Opens the Preferences dialog box for configuring Virtual Machine Manager options. View Graph Guest CPU Usage Host CPU Usage Memory Usage Disk I/O Network I/O Toggles displays of the selected parameter for the virtual machines in the Virtual Machine Manager main window. Help About Displays the About window with information about the Virtual Machine Manager. 5.2.1.2. The main window toolbar The following table lists the icons in the Virtual Machine Manager main window. Table 5.2. Virtual Machine Manager main window toolbar Icon Description Opens the New VM wizard to create a new guest virtual machine. Opens the Virtual Machine window for the selected virtual machine. Starts the selected virtual machine. Pauses the selected virtual machine. Stops the selected virtual machine. Opens a menu to select one of the following actions to perform on the selected virtual machine: Reboot - Reboots the selected virtual machine. Shut Down - Shuts down the selected virtual machine. Force Reset - Forces the selected virtual machine to shut down and restart. Force Off - Forces the selected virtual machine to shut down. Save - Saves the state of the selected virtual machine to a file. For more information, see Saving a Guest Virtual Machine's Configuration in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. 5.2.1.3. The Virtual Machine list The virtual machine list displays a list of virtual machines to which the Virtual Machine Manager is connected. The virtual machines in the list are grouped by connection. You can sort the list by clicking on the header of a table column. Figure 5.3. The Virtual Machine list The virtual machine list displays graphs with information about the resources being used by each virtual machine. 
You make resources available for display from the Polling tab of the Preferences dialog in the Edit menu. The following is a list of the resources that can be displayed in the virtual machine list: CPU usage Host CPU usage Memory usage Disk I/O Network I/O You can select the resources to display using the Graph menu item in the View menu. 5.2.2. The Virtual Machine Window This section provides information about the Virtual Machine window interface. Figure 5.4. The Virtual Machine window The title bar displays the name of the virtual machine and the connection that it uses. 5.2.2.1. The Virtual Machine window menu bar The following table lists the entries in the Virtual Machine window menus. Table 5.3. Virtual Machine window menus Menu name Menu item Description File View Manager Opens the main Virtual Machine Manager window. Close Closes only the Virtual Machine window without stopping the virtual machine. Exit Closes the all Virtual Machine Manager windows. Running virtual machines are not stopped. Virtual Machine Run Runs the virtual machine. This option is only available if the virtual machine is not running. Pause Pauses the virtual machine. This option is only available if the virtual machine is already running. Shut Down Opens a menu to select one of the following actions to perform on the virtual machine: Reboot - Reboots the virtual machine. Shut Down - Shuts down the virtual machine. Force Reset - Forces the virtual machine to shut down and restart. Force Off - Forces the virtual machine to shut down. Save - Saves the state of the virtual machine to a file. Clone Creates a clone of the virtual machine. For more information, see Cloning Guests with virt-manager in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. Migrate Opens the Migrate the virtual machine dialog to migrate the virtual machine to a different host. For more information, see Migrating with virt-manager in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. Delete Deletes the virtual machine. Take Screenshot Takes a screenshot of the virtual machine console. Redirect USB Device Opens the Select USB devices for redirection dialog to select USB devices to redirect. For more information, see USB Redirection in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. View Console Opens the Console display in the Virtual Machine pane. Details Opens the Details display in the Virtual Machine pane. For more information, see The virtual machine details window . Snapshots Opens the Snapshots display in the Virtual Machine pane. For more information, see The snapshots window . Fullscreen Displays the virtual machine console in full screen mode. Resize to VM Resizes the display on the full screen to the size and resolution configured for the virtual machine. Scale Display Scales the display of the virtual machine based on the selection of the following sub-menu items: Always - The display of the virtual machine is always scaled to the Virtual Machine window. Only when Fullscreen - The display of the virtual machine is only scaled to the Virtual Machine window when the Virtual Machine window is in Full screen mode.. Never - The display of the virtual machine is never scaled to the Virtual Machine window. Auto resize VM with window - The display of the virtual machine resizes automatically when the Virtual Machine window is resized. Text Consoles Displays the virtual machine display selected in the list. 
Examples of virtual machine displays include Serial 1 and Graphical Console Spice . Toolbar Toggles the display of the Virtual Machine window toolbar. Send Key Ctrl+Alt+Backspace Ctrl+Alt+Delete Ctrl+Alt+F1 Ctrl+Alt+F2 Ctrl+Alt+F3 Ctrl+Alt+F4 Ctrl+Alt+F5 Ctrl+Alt+F6 Ctrl+Alt+F7 Ctrl+Alt+F8 Ctrl+Alt+F9 Ctrl+Alt+F10 Ctrl+Alt+F11 Ctrl+Alt+F12 Ctrl+Alt+Printscreen Sends the selected key to the virtual machine. 5.2.2.2. The Virtual Machine window toolbar The following table lists the icons in the Virtual Machine window. Table 5.4. Virtual Machine window toolbar Icon Description Displays the graphical console for the virtual machine. Displays the details pane for the virtual machine. Starts the selected virtual machine. Pauses the selected virtual machine. Stops the selected virtual machine. Opens a menu to select one of the following actions to perform on the selected virtual machine: Reboot - Reboots the selected virtual machine. Shut Down - Shuts down the selected virtual machine. Force Reset - Forces the selected virtual machine to shut down and restart. Force Off - Forces the selected virtual machine to shut down. Save - Saves the state of the selected virtual machine to a file. Opens the Snapshots display in the Virtual Machine pane. Displays the virtual machine console in full screen mode. 5.2.2.3. The Virtual Machine pane The Virtual Machine pane displays one of the following: The virtual machine console The virtual machine details window The snapshots window The virtual machine console The virtual machine console shows the graphical output of the virtual machine. Figure 5.5. The Virtual Machine console You can interact with the virtual machine console using the mouse and keyboard in the same manner you interact with a real machine. The display in the virtual machine console reflects the activities being performed on the virtual machine. The virtual machine details window The virtual machine details window provides detailed information about the virtual machine, its hardware and configuration. Figure 5.6. The Virtual Machine details window The virtual machine details window includes a list of virtual machine parameters. When a parameter in the list is selected, information about the selected parameter appear on the right side of the virtual machine details window. You can also add and configure hardware using the virtual machine details window. For more information on the virtual machine details window, see The Virtual Hardware Details Window in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. The snapshots window The virtual machine snapshots window provides a list of snapshots created for the virtual machine. Figure 5.7. The Virtual Machine snapshots window The virtual machine snapshots window includes a list of snapshots saved for the virtual machine. When a snapshot in the list is selected, details about the selected snapshot, including its state, description, and a screenshot, appear on the right side of the virtual machine snapshots window. You can add, delete, and run snapshots using the virtual machine snapshots window. For more information about managing snapshots, see Managing Snapshots in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_getting_started_guide/virt-manager-user-interface-description |
Chapter 19. Managing Guests with the Virtual Machine Manager (virt-manager) | Chapter 19. Managing Guests with the Virtual Machine Manager (virt-manager) This chapter describes the Virtual Machine Manager ( virt-manager ) windows, dialog boxes, and various GUI controls. virt-manager provides a graphical view of hypervisors and guests on your host system and on remote host systems. virt-manager can perform virtualization management tasks, including defining and creating guests, assigning memory, assigning virtual CPUs, monitoring operational performance, saving and restoring guests, pausing and resuming guests, shutting down and starting guests, providing links to the textual and graphical consoles, and performing live and offline migrations. Important It is important to note which user you are using. If you create a guest virtual machine with one user, you will not be able to retrieve information about it using another user. This is especially important when you create a virtual machine in virt-manager. The default user is root in that case unless otherwise specified. If you cannot list a virtual machine using the virsh list --all command, it is most likely because you are running the command as a different user than the one used to create the virtual machine. 19.1. Starting virt-manager To start a virt-manager session, open the Applications menu, then the System Tools menu, and select Virtual Machine Manager ( virt-manager ). The virt-manager main window appears. Figure 19.1. Starting virt-manager Alternatively, virt-manager can be started remotely using ssh as demonstrated in the following command: Using ssh to manage virtual machines and hosts is discussed further in Section 18.2, "Remote Management with SSH" . | [
"ssh -X host's address virt-manager"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-managing_guests_with_the_virtual_machine_manager_virt_manager |
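As a concrete, hedged illustration of the remote invocation above, the user and host name are placeholders for your own environment: ssh -X root@virt-host.example.com virt-manager The -X option enables X11 forwarding, so the virt-manager window started on the remote host is displayed locally.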
Chapter 5. Configuring Network Connection Settings | Chapter 5. Configuring Network Connection Settings This chapter describes various configurations of the network connection settings and shows how to configure them by using NetworkManager . 5.1. Configuring 802.3 Link Settings You can configure the 802.3 link settings of an Ethernet connection by modifying the following configuration parameters: 802-3-ethernet.auto-negotiate 802-3-ethernet.speed 802-3-ethernet.duplex You can configure the 802.3 link settings in three main modes: Ignore link negotiation Enforce auto-negotiation activation Manually set the speed and duplex link settings Ignoring link negotiation In this case, NetworkManager ignores link configuration for an Ethernet connection, keeping the existing configuration on the device. To ignore link negotiation, set the following parameters: Important If the auto-negotiate parameter is set to no , but the speed and duplex values are not set, that does not mean that auto-negotiation is disabled. Enforcing auto-negotiation activation In this case, NetworkManager enforces auto-negotiation on a device. To enforce auto-negotiation activation, set the following options: Manually setting the link speed and duplex In this case, you can manually configure the speed and duplex settings on the link. To manually set the speed and duplex link settings, set the aforementioned parameters as follows: Important Make sure to set both the speed and the duplex values, otherwise NetworkManager does not update the link configuration. As a system administrator, you can configure 802.3 link settings using one of the following options: the nmcli tool the nm-connection-editor utility Configuring 802.3 Link Settings with the nmcli Tool Procedure Create a new Ethernet connection for the enp1s0 device. Set the 802.3 link settings to a configuration of your choice. For details, see Section 5.1, "Configuring 802.3 Link Settings" For example, to manually set the speed to 100 Mbit/s and the duplex to full : Configuring 802.3 Link Settings with nm-connection-editor Procedure Enter nm-connection-editor in a terminal. Select the Ethernet connection you want to edit and click the gear wheel icon to move to the editing dialog. See Section 3.4.3, "Common Configuration Options Using nm-connection-editor" for more information. Select the link negotiation of your choice. Ignore : link configuration is skipped (default). Automatic : link auto-negotiation is enforced on the device. Manual : the Speed and Duplex options can be specified to enforce the link negotiation. Figure 5.1. Configure 802.3 link settings using nm-connection-editor | [
"802-3-ethernet.auto-negotiate = no 802-3-ethernet.speed = 0 802-3-ethernet.duplex = NULL",
"802-3-ethernet.auto-negotiate = yes 802-3-ethernet.speed = 0 802-3-ethernet.duplex = NULL",
"802-3-ethernet.auto-negotiate = no 802-3-ethernet.speed = [speed in Mbit/s] 802-3-ethernet.duplex = [half |full]",
"nmcli connection add con-name MyEthernet type ethernet ifname enp1s0 802-3-ethernet.auto-negotiate no 802-3-ethernet.speed 100 802-3-ethernet.duplex full"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/ch-Configuring_Network_Connection_Settings |
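For the auto-negotiation mode described above, a minimal nmcli sketch (assuming the MyEthernet connection created in the earlier nmcli example) sets the parameters listed for that mode; passing an empty value to 802-3-ethernet.duplex resets it to its default: nmcli connection modify MyEthernet 802-3-ethernet.auto-negotiate yes 802-3-ethernet.speed 0 802-3-ethernet.duplex "" nmcli connection up MyEthernet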
Service Mesh | Service Mesh OpenShift Container Platform 4.13 Service Mesh installation, usage, and release notes Red Hat OpenShift Documentation Team | [
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: true",
"spec: meshConfig discoverySelectors: - matchLabels: env: prod region: us-east1 - matchExpressions: - key: app operator: In values: - cassandra - spark",
"spec: meshConfig: extensionProviders: - name: prometheus prometheus: {} --- apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: enable-prometheus-metrics spec: metrics: - providers: - name: prometheus",
"spec: techPreview: gatewayAPI: enabled: true",
"spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"true\" PILOT_ENABLE_GATEWAY_API_STATUS: \"true\" PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: \"true\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: cluster-wide namespace: istio-system spec: version: v2.3 techPreview: controlPlaneMode: ClusterScoped 1",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - '*' 1",
"kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.4.0\" | kubectl apply -f -; }",
"spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"true\" PILOT_ENABLE_GATEWAY_API_STATUS: \"true\" # and optionally, for the deployment controller PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: \"true\"",
"apiVersion: gateway.networking.k8s.io/v1alpha2 kind: Gateway metadata: name: gateway spec: addresses: - value: ingress.istio-gateways.svc.cluster.local type: Hostname",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: trust: manageNetworkPolicy: false",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: techPreview: meshConfig: defaultConfig: proxyMetadata: HTTP_STRIP_FRAGMENT_FROM_PATH_UNSAFE_IF_DISABLED: \"false\"",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]",
"spec: techPreview: global: pathNormalization: <option>",
"oc create -f <myEnvoyFilterFile>",
"apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: ingress-case-insensitive namespace: istio-system spec: configPatches: - applyTo: HTTP_FILTER match: context: GATEWAY listener: filterChain: filter: name: \"envoy.filters.network.http_connection_manager\" subFilter: name: \"envoy.filters.http.router\" patch: operation: INSERT_BEFORE value: name: envoy.lua typed_config: \"@type\": \"type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua\" inlineCode: | function envoy_on_request(request_handle) local path = request_handle:headers():get(\":path\") request_handle:headers():replace(\":path\", string.lower(path)) end",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled gateways: ingress: enabled: true",
"label namespace istio-system istio-discovery=enabled",
"2023-05-02T15:20:42.541034Z error watch error in cluster Kubernetes: failed to list *v1alpha2.TLSRoute: the server could not find the requested resource (get tlsroutes.gateway.networking.k8s.io) 2023-05-02T15:20:42.616450Z info kube controller \"gateway.networking.k8s.io/v1alpha2/TCPRoute\" is syncing",
"kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref=v0.5.1\" | kubectl apply -f -; }",
"apiVersion: networking.istio.io/v1beta1 kind: ProxyConfig metadata: name: mesh-wide-concurrency namespace: <istiod-namespace> spec: concurrency: 0",
"api: namespaces: exclude: - \"^istio-operator\" - \"^kube-.*\" - \"^openshift.*\" - \"^ibm.*\" - \"^kiali-operator\"",
"spec: proxy: networking: trafficControl: inbound: excludedPorts: - 15020",
"spec: runtime: components: pilot: container: env: APPLY_WASM_PLUGINS_TO_INBOUND_ONLY: \"true\"",
"error Installer exits with open /host/etc/cni/multus/net.d/v2-2-istio-cni.kubeconfig.tmp.841118073: no such file or directory",
"oc label namespace istio-system maistra.io/ignore-namespace-",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: true",
"An error occurred admission webhook smcp.validation.maistra.io denied the request: [support for policy.type \"Mixer\" and policy.Mixer options have been removed in v2.1, please use another alternative, support for telemetry.type \"Mixer\" and telemetry.Mixer options have been removed in v2.1, please use another alternative]\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: policy: type: Istiod telemetry: type: Istiod version: v2.5",
"oc project istio-system",
"oc get smcp -o yaml",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.5",
"oc get smcp -o yaml",
"oc get smcp.v1.maistra.io <smcp_name> > smcp-resource.yaml #Edit the smcp-resource.yaml file. oc replace -f smcp-resource.yaml",
"oc patch smcp.v1.maistra.io <smcp_name> --type json --patch '[{\"op\": \"replace\",\"path\":\"/spec/path/to/bad/setting\",\"value\":\"corrected-value\"}]'",
"oc edit smcp.v1.maistra.io <smcp_name>",
"oc project istio-system",
"oc get servicemeshcontrolplanes.v1.maistra.io <smcp_name> -o yaml > <smcp_name>.v1.yaml",
"oc get smcp <smcp_name> -o yaml > <smcp_name>.v2.yaml",
"oc new-project istio-system-upgrade",
"oc create -n istio-system-upgrade -f <smcp_name>.v2.yaml",
"spec: policy: type: Mixer",
"spec: telemetry: type: Mixer",
"apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-disable namespace: <namespace> spec: targets: - name: productpage",
"apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-disable namespace: <namespace> spec: mtls: mode: DISABLE selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage",
"apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: targets: - name: productpage ports: - number: 9000 peers: - mtls: origins: - jwt: issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" jwtHeaders: - \"x-goog-iap-jwt-assertion\" triggerRules: - excludedPaths: - exact: /health_check principalBinding: USE_ORIGIN",
"#require mtls for productpage:9000 apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage portLevelMtls: 9000: mode: STRICT --- #JWT authentication for productpage apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage jwtRules: - issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" fromHeaders: - name: \"x-goog-iap-jwt-assertion\" --- #Require JWT token to access product page service from #any client to all paths except /health_check apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: action: ALLOW selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage rules: - to: # require JWT token to access all other paths - operation: notPaths: - /health_check from: - source: # if using principalBinding: USE_PEER in the Policy, # then use principals, e.g. # principals: # - \"*\" requestPrincipals: - \"*\" - to: # no JWT token required to access health_check - operation: paths: - /health_check",
"spec: tracing: sampling: 100 # 1% type: Jaeger",
"spec: addons: jaeger: name: jaeger install: storage: type: Memory # or Elasticsearch for production mode memory: maxTraces: 100000 elasticsearch: # the following values only apply if storage:type:=Elasticsearch storage: # specific storageclass configuration for the Jaeger Elasticsearch (optional) size: \"100G\" storageClassName: \"storageclass\" nodeCount: 3 redundancyPolicy: SingleRedundancy runtime: components: tracing.jaeger: {} # general Jaeger specific runtime configuration (optional) tracing.jaeger.elasticsearch: #runtime configuration for Jaeger Elasticsearch deployment (optional) container: resources: requests: memory: \"1Gi\" cpu: \"500m\" limits: memory: \"1Gi\"",
"spec: addons: grafana: enabled: true install: {} # customize install kiali: enabled: true name: kiali install: {} # customize install",
"oc rollout restart <deployment>",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled 1 - matchExpressions: - key: kubernetes.io/metadata.name 2 operator: In values: - bookinfo - httpbin - istio-system",
"oc -n istio-system edit smcp <name> 1",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled 1 - matchExpressions: - key: kubernetes.io/metadata.name 2 operator: In values: - bookinfo - httpbin - istio-system",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: memberSelectors: - matchLabels: istio-injection: enabled 1",
"oc edit smmr -n <controlplane-namespace>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: memberSelectors: - matchLabels: istio-injection: enabled 1",
"apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: annotations: sidecar.istio.io/inject: 'true' 1 labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-without-sidecar spec: selector: matchLabels: app: nginx-without-sidecar template: metadata: labels: app: nginx-without-sidecar 2 spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80",
"oc edit deployment -n <namespace> <deploymentName>",
"apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: annotations: sidecar.istio.io/inject: 'true' 1 labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-without-sidecar spec: selector: matchLabels: app: nginx-without-sidecar template: metadata: labels: app: nginx-without-sidecar 2 spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-usernamepolicy spec: action: ALLOW rules: - when: - key: 'request.regex.headers[username]' values: - \"allowed.*\" selector: matchLabels: app: httpbin",
"oc -n openshift-operators get subscriptions",
"oc -n openshift-operators edit subscription <name> 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/servicemeshoperator.openshift-operators: \"\" name: servicemeshoperator namespace: openshift-operators spec: config: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc -n openshift-operators get po -l name=istio-operator -owide",
"oc new-project istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.5 tracing: type: None sampling: 10000 addons: kiali: enabled: true name: kiali grafana: enabled: true",
"oc create -n istio-system -f <istio_installation.yaml>",
"oc get pods -n istio-system -w",
"NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s jaeger-67c75bd6dc-jv6k6 2/2 Running 0 65m kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m",
"oc get smcp -n istio-system",
"NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.5.2 66m",
"spec: runtime: defaults: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"spec: runtime: components: pilot: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"spec: gateways: ingress: runtime: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved egress: runtime: pod: nodeSelector: 3 node-role.kubernetes.io/infra: \"\" tolerations: 4 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc -n istio-system edit smcp <name> 1",
"spec: runtime: defaults: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc -n istio-system edit smcp <name> 1",
"spec: runtime: components: pilot: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"spec: gateways: ingress: runtime: pod: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved egress: runtime: pod: nodeSelector: 3 node-role.kubernetes.io/infra: \"\" tolerations: 4 - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc -n istio-system get pods -owide",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.5 mode: ClusterWide",
"oc new-project istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.5 mode: ClusterWide",
"oc create -n istio-system -f <istio_installation.yaml>",
"oc get pods -n istio-system -w",
"NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s jaeger-67c75bd6dc-jv6k6 2/2 Running 0 65m kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project <your-project>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system default",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc edit smmr -n <controlplane-namespace>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default namespace: my-application spec: controlPlaneRef: namespace: istio-system name: basic",
"oc apply -f <file-name>",
"oc get smm default -n my-application",
"NAME CONTROL PLANE READY AGE default istio-system/basic True 2m11s",
"oc describe smmr default -n istio-system",
"Name: default Namespace: istio-system Labels: <none> Status: Configured Members: default my-application Members: default my-application",
"oc edit smmr default -n istio-system",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: memberSelectors: 1 - matchLabels: 2 mykey: myvalue 3 - matchLabels: 4 myotherkey: myothervalue 5",
"oc new-project bookinfo",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system -o wide",
"NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/platform/kube/bookinfo.yaml",
"service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/bookinfo-gateway.yaml",
"gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/destination-rule-all.yaml",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/destination-rule-all-mtls.yaml",
"destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created",
"oc get pods -n bookinfo",
"NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc delete project bookinfo",
"oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'",
"oc get deployment -n <namespace>",
"get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true'",
"oc apply -n <namespace> -f deployment.yaml",
"oc apply -n bookinfo -f deployment-ratings-v1.yaml",
"oc get deployment -n <namespace> <deploymentName> -o yaml",
"oc get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"",
"oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'",
"oc policy add-role-to-user -n istio-system --role-namespace istio-system mesh-user <user_name>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default spec: controlPlaneRef: namespace: istio-system name: basic",
"oc policy add-role-to-user",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: namespace: istio-system name: mesh-users roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: mesh-user subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice",
"oc create configmap --from-file=<profiles-directory> smcp-templates -n openshift-operators",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - default",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: version: v2.5 security: dataPlane: mtls: true",
"apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: default namespace: <namespace> spec: mtls: mode: STRICT",
"oc create -n <namespace> -f <policy.yaml>",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: default namespace: <namespace> spec: host: \"*.<namespace>.svc.cluster.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL",
"oc create -n <namespace> -f <destination-rule.yaml>",
"kind: ServiceMeshControlPlane spec: security: controlPlane: tls: minProtocolVersion: TLSv1_2",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: DENY rules: - from: - source: ipBlocks: [\"1.2.3.4\"]",
"oc create -n istio-system -f <filename>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-deny namespace: bookinfo spec: selector: matchLabels: app: httpbin version: v1 action: DENY rules: - from: - source: notNamespaces: [\"bookinfo\"]",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allow-all namespace: bookinfo spec: action: ALLOW rules: - {}",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: bookinfo spec: {}",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: ALLOW rules: - from: - source: ipBlocks: [\"1.2.3.4\", \"5.6.7.0/24\"]",
"apiVersion: \"security.istio.io/v1beta1\" kind: \"RequestAuthentication\" metadata: name: \"jwt-example\" namespace: bookinfo spec: selector: matchLabels: app: httpbin jwtRules: - issuer: \"http://localhost:8080/auth/realms/master\" jwksUri: \"http://keycloak.default.svc:8080/auth/realms/master/protocol/openid-connect/certs\"",
"apiVersion: \"security.istio.io/v1beta1\" kind: \"AuthorizationPolicy\" metadata: name: \"frontend-ingress\" namespace: bookinfo spec: selector: matchLabels: app: httpbin action: DENY rules: - from: - source: notRequestPrincipals: [\"*\"]",
"oc edit smcp <smcp-name>",
"spec: security: dataPlane: mtls: true # enable mtls for data plane # JWKSResolver extra CA # PEM-encoded certificate content to trust an additional CA jwksResolverCA: | -----BEGIN CERTIFICATE----- [...] [...] -----END CERTIFICATE-----",
"kind: ConfigMap apiVersion: v1 data: extra.pem: | -----BEGIN CERTIFICATE----- [...] [...] -----END CERTIFICATE-----",
"oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true certificateAuthority: type: Istiod istiod: type: PrivateKey privateKey: rootCADir: /etc/cacerts",
"oc -n istio-system delete pods -l 'app in (istiod,istio-ingressgateway, istio-egressgateway)'",
"oc -n bookinfo delete pods --all",
"pod \"details-v1-6cd699df8c-j54nh\" deleted pod \"productpage-v1-5ddcb4b84f-mtmf2\" deleted pod \"ratings-v1-bdbcc68bc-kmng4\" deleted pod \"reviews-v1-754ddd7b6f-lqhsv\" deleted pod \"reviews-v2-675679877f-q67r2\" deleted pod \"reviews-v3-79d7549c7-c2gjs\" deleted",
"oc get pods -n bookinfo",
"sleep 60 oc -n bookinfo exec \"USD(oc -n bookinfo get pod -l app=productpage -o jsonpath={.items..metadata.name})\" -c istio-proxy -- openssl s_client -showcerts -connect details:9080 > bookinfo-proxy-cert.txt sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' bookinfo-proxy-cert.txt > certs.pem awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > \"proxy-cert-\" counter \".pem\"}' < certs.pem",
"openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt",
"openssl x509 -in ./proxy-cert-3.pem -text -noout > /tmp/pod-root-cert.crt.txt",
"diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt",
"openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt",
"openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt",
"diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt",
"openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) ./proxy-cert-1.pem",
"oc delete secret cacerts -n istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-root-issuer namespace: cert-manager spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: root-ca namespace: cert-manager spec: isCA: true duration: 21600h # 900d secretName: root-ca commonName: root-ca.my-company.net subject: organizations: - my-company.net issuerRef: name: selfsigned-root-issuer kind: Issuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: root-ca spec: ca: secretName: root-ca",
"oc apply -f cluster-issuer.yaml",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: istio-system spec: isCA: true duration: 21600h secretName: istio-ca commonName: istio-ca.my-company.net subject: organizations: - my-company.net issuerRef: name: root-ca kind: ClusterIssuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: istio-ca namespace: istio-system spec: ca: secretName: istio-ca",
"oc apply -n istio-system -f istio-ca.yaml",
"helm install istio-csr jetstack/cert-manager-istio-csr -n istio-system -f deploy/examples/cert-manager/istio-csr/istio-csr.yaml",
"replicaCount: 2 image: repository: quay.io/jetstack/cert-manager-istio-csr tag: v0.6.0 pullSecretName: \"\" app: certmanager: namespace: istio-system issuer: group: cert-manager.io kind: Issuer name: istio-ca controller: configmapNamespaceSelector: \"maistra.io/member-of=istio-system\" leaderElectionNamespace: istio-system istio: namespace: istio-system revisions: [\"basic\"] server: maxCertificateDuration: 5m tls: certificateDNSNames: # This DNS name must be set in the SMCP spec.security.certificateAuthority.cert-manager.address - cert-manager-istio-csr.istio-system.svc",
"oc apply -f mesh.yaml -n istio-system",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: grafana: enabled: false kiali: enabled: false prometheus: enabled: false proxy: accessLogging: file: name: /dev/stdout security: certificateAuthority: cert-manager: address: cert-manager-istio-csr.istio-system.svc:443 type: cert-manager dataPlane: mtls: true identity: type: ThirdParty tracing: type: None --- apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - httpbin - sleep",
"oc new-project <namespace>",
"oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin.yaml",
"oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/sleep/sleep.yaml",
"oc exec \"USD(oc get pod -l app=sleep -n <namespace> -o jsonpath={.items..metadata.name})\" -c sleep -n <namespace> -- curl http://httpbin.<namespace>:8000/ip -s -o /dev/null -w \"%{http_code}\\n\"",
"200",
"oc apply -n <namespace> -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin-gateway.yaml",
"INGRESS_HOST=USD(oc -n istio-system get routes istio-ingressgateway -o jsonpath='{.spec.host}')",
"curl -s -I http://USDINGRESS_HOST/headers -o /dev/null -w \"%{http_code}\" -s",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy",
"apiVersion: v1 kind: Service metadata: name: istio-ingressgateway namespace: istio-ingress spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 --- apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: istio: ingressgateway sidecar.istio.io/inject: \"true\" 1 spec: containers: - name: istio-proxy image: auto 2",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: istio-ingressgateway-sds namespace: istio-ingress rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-sds namespace: istio-ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: istio-ingressgateway-sds subjects: - kind: ServiceAccount name: default",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: gatewayingress namespace: istio-ingress spec: podSelector: matchLabels: istio: ingressgateway ingress: - {} policyTypes: - Ingress",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: labels: istio: ingressgateway release: istio name: ingressgatewayhpa namespace: istio-ingress spec: maxReplicas: 5 metrics: - resource: name: cpu target: averageUtilization: 80 type: Utilization type: Resource minReplicas: 2 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: istio-ingressgateway",
"apiVersion: policy/v1 kind: PodDisruptionBudget metadata: labels: istio: ingressgateway release: istio name: ingressgatewaypdb namespace: istio-ingress spec: minAvailable: 1 selector: matchLabels: istio: ingressgateway",
"oc get svc istio-ingressgateway -n istio-system",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"",
"oc apply -f gateway.yaml",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080",
"oc apply -f vs.yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')",
"curl -s -I \"USDGATEWAY_URL/productpage\"",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com",
"oc -n istio-system get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None",
"apiVersion: maistra.io/v1alpha1 kind: ServiceMeshControlPlane metadata: namespace: istio-system spec: gateways: openshiftRoute: enabled: false",
"apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3",
"oc apply -f <VirtualService.yaml>",
"spec: hosts:",
"spec: http: - match:",
"spec: http: - match: - destination:",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: manageNetworkPolicy: false",
"apiVersion: networking.istio.io/v1alpha3 kind: Sidecar metadata: name: default namespace: bookinfo spec: egress: - hosts: - \"./*\" - \"istio-system/*\"",
"oc apply -f sidecar.yaml",
"oc get sidecar",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/virtual-service-all-v1.yaml",
"oc get virtualservices -o yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml",
"oc get virtualservice reviews -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway-canary namespace: istio-system 1 spec: selector: matchLabels: app: istio-ingressgateway istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: 2 app: istio-ingressgateway istio: ingressgateway sidecar.istio.io/inject: \"true\" spec: containers: - name: istio-proxy image: auto serviceAccountName: istio-ingressgateway --- apiVersion: v1 kind: ServiceAccount metadata: name: istio-ingressgateway namespace: istio-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: secret-reader namespace: istio-system rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-secret-reader namespace: istio-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: secret-reader subjects: - kind: ServiceAccount name: istio-ingressgateway --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy 3 metadata: name: gatewayingress namespace: istio-system spec: podSelector: matchLabels: istio: ingressgateway ingress: - {} policyTypes: - Ingress",
"oc scale -n istio-system deployment/<new_gateway_deployment> --replicas <new_number_of_replicas>",
"oc scale -n istio-system deployment/<old_gateway_deployment> --replicas <new_number_of_replicas>",
"oc label service -n istio-system istio-ingressgateway app.kubernetes.io/managed-by-",
"oc patch service -n istio-system istio-ingressgateway --type='json' -p='[{\"op\": \"remove\", \"path\": \"/metadata/ownerReferences\"}]'",
"oc patch smcp -n istio-system <smcp_name> --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/gateways/ingress/enabled\", \"value\": false}]'",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: false",
"kind: Route apiVersion: route.openshift.io/v1 metadata: name: example-gateway namespace: istio-system 1 spec: host: www.example.com to: kind: Service name: istio-ingressgateway 2 weight: 100 port: targetPort: http2 wildcardPolicy: None",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc project istio-system",
"oc get routes",
"NAME HOST/PORT SERVICES PORT TERMINATION bookinfo-gateway bookinfo-gateway-yourcompany.com istio-ingressgateway http2 grafana grafana-yourcompany.com grafana <all> reencrypt/Redirect istio-ingressgateway istio-ingress-yourcompany.com istio-ingressgateway 8080 jaeger jaeger-yourcompany.com jaeger-query <all> reencrypt kiali kiali-yourcompany.com kiali 20001 reencrypt/Redirect prometheus prometheus-yourcompany.com prometheus <all> reencrypt/Redirect",
"curl \"http://USDGATEWAY_URL/productpage\"",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: bookinfo 1 annotations: sidecar.istio.io/inject: 'true' 2 spec: mode: deployment config: | receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: otlp: endpoint: \"tempo-sample-distributor.tracing-system.svc.cluster.local:4317\" 3 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp]",
"oc logs -n bookinfo -l app.kubernetes.io/name=otel-collector",
"kind: ServiceMeshControlPlane apiVersion: maistra.io/v2 metadata: name: basic namespace: istio-system spec: addons: grafana: enabled: false kiali: enabled: true prometheus: enabled: true meshConfig: extensionProviders: - name: otel opentelemetry: port: 4317 service: otel-collector.bookinfo.svc.cluster.local policy: type: Istiod telemetry: type: Istiod version: v2.6",
"spec: tracing: type: None",
"apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: mesh-default namespace: istio-system spec: tracing: - providers: - name: otel randomSamplingPercentage: 100",
"apiVersion: kiali.io/v1alpha1 kind: Kiali spec: external_services: tracing: query_timeout: 30 1 enabled: true in_cluster_url: 'http://tempo-sample-query-frontend.tracing-system.svc.cluster.local:16685' url: '[Tempo query frontend Route url]' use_grpc: true 2",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: tempo namespace: tracing-system-mtls spec: host: \"*.tracing-system-mtls.svc.cluster.local\" trafficPolicy: tls: mode: DISABLE",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: kiali namespace: istio-system spec: host: kiali.istio-system.svc.cluster.local trafficPolicy: tls: mode: DISABLE",
"spec: addons: jaeger: name: distr-tracing-production",
"spec: tracing: sampling: 100",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc get route -n istio-system jaeger -o jsonpath='{.spec.host}'",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kiali-monitoring-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view subjects: - kind: ServiceAccount name: kiali-service-account namespace: istio-system",
"apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali-user-workload-monitoring namespace: istio-system spec: external_services: prometheus: auth: type: bearer use_kiali_token: true query_scope: mesh_id: \"basic-istio-system\" thanos_proxy: enabled: true url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091",
"apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali-user-workload-monitoring namespace: istio-system spec: external_services: istio: config_map_name: istio-<smcp-name> istio_sidecar_injector_config_map_name: istio-sidecar-injector-<smcp-name> istiod_deployment_name: istiod-<smcp-name> url_service_version: 'http://istiod-<smcp-name>.istio-system:15014/version' prometheus: auth: token: secret:thanos-querier-web-token:token type: bearer use_kiali_token: false query_scope: mesh_id: \"basic-istio-system\" thanos_proxy: enabled: true url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 version: v1.65",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: addons: prometheus: enabled: false 1 grafana: enabled: false 2 kiali: name: kiali-user-workload-monitoring meshConfig: extensionProviders: - name: prometheus prometheus: {}",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: user-workload-access namespace: istio-system 1 spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress",
"apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: enable-prometheus-metrics namespace: istio-system 1 spec: selector: 2 matchLabels: app: bookinfo metrics: - providers: - name: prometheus",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: istiod-monitor namespace: istio-system 1 spec: targetLabels: - app selector: matchLabels: istio: pilot endpoints: - port: http-monitoring interval: 30s relabelings: - action: replace replacement: \"basic-istio-system\" 2 targetLabel: mesh_id",
"apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: istio-proxies-monitor namespace: istio-system 1 spec: selector: matchExpressions: - key: istio-prometheus-ignore operator: DoesNotExist podMetricsEndpoints: - path: /stats/prometheus interval: 30s relabelings: - action: keep sourceLabels: [__meta_kubernetes_pod_container_name] regex: \"istio-proxy\" - action: keep sourceLabels: [__meta_kubernetes_pod_annotationpresent_prometheus_io_scrape] - action: replace regex: (\\d+);(([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4}) replacement: '[USD2]:USD1' sourceLabels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] targetLabel: __address__ - action: replace regex: (\\d+);((([0-9]+?)(\\.|USD)){4}) replacement: USD2:USD1 sourceLabels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] targetLabel: __address__ - action: labeldrop regex: \"__meta_kubernetes_pod_label_(.+)\" - sourceLabels: [__meta_kubernetes_namespace] action: replace targetLabel: namespace - sourceLabels: [__meta_kubernetes_pod_name] action: replace targetLabel: pod_name - action: replace replacement: \"basic-istio-system\" 2 targetLabel: mesh_id",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.5 proxy: runtime: container: resources: requests: cpu: 600m memory: 50Mi limits: {} runtime: components: pilot: container: resources: requests: cpu: 1000m memory: 1.6Gi limits: {} kiali: container: resources: limits: cpu: \"90m\" memory: \"245Mi\" requests: cpu: \"30m\" memory: \"108Mi\" global.oauthproxy: container: resources: requests: cpu: \"101m\" memory: \"256Mi\" limits: cpu: \"201m\" memory: \"512Mi\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.5 tracing: sampling: 100 type: Jaeger addons: jaeger: name: MyJaeger install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}",
"oc get smcp basic -o yaml",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: version: v2.5 runtime: defaults: container: imagePullPolicy: Always gateways: additionalEgress: egress-green-mesh: enabled: true requestedNetworkView: - green-network routerMode: sni-dnat service: metadata: labels: federation.maistra.io/egress-for: egress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here egress-blue-mesh: enabled: true requestedNetworkView: - blue-network routerMode: sni-dnat service: metadata: labels: federation.maistra.io/egress-for: egress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here additionalIngress: ingress-green-mesh: enabled: true routerMode: sni-dnat service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here ingress-blue-mesh: enabled: true routerMode: sni-dnat service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here security: trust: domain: red-mesh.local",
"spec: cluster: name:",
"spec: cluster: network:",
"spec: gateways: additionalEgress: <egressName>:",
"spec: gateways: additionalEgress: <egressName>: enabled:",
"spec: gateways: additionalEgress: <egressName>: requestedNetworkView:",
"spec: gateways: additionalEgress: <egressName>: routerMode:",
"spec: gateways: additionalEgress: <egressName>: service: metadata: labels: federation.maistra.io/egress-for:",
"spec: gateways: additionalEgress: <egressName>: service: ports:",
"spec: gateways: additionalIngress:",
"spec: gateways: additionalIgress: <ingressName>: enabled:",
"spec: gateways: additionalIngress: <ingressName>: routerMode:",
"spec: gateways: additionalIngress: <ingressName>: service: type:",
"spec: gateways: additionalIngress: <ingressName>: service: type:",
"spec: gateways: additionalIngress: <ingressName>: service: metadata: labels: federation.maistra.io/ingress-for:",
"spec: gateways: additionalIngress: <ingressName>: service: ports:",
"spec: gateways: additionalIngress: <ingressName>: service: ports: nodePort:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: green-mesh namespace: green-mesh-system spec: gateways: additionalIngress: ingress-green-mesh: enabled: true routerMode: sni-dnat service: type: NodePort metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 nodePort: 30510 name: tls - port: 8188 nodePort: 32359 name: https-discovery",
"kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: security: trust: domain: red-mesh.local",
"spec: security: trust: domain:",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc project red-mesh-system",
"oc edit -n red-mesh-system smcp red-mesh",
"oc get smcp -n red-mesh-system",
"NAME READY STATUS PROFILES VERSION AGE red-mesh 10/10 ComponentsReady [\"default\"] 2.1.0 4m25s",
"kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert",
"metadata: name:",
"metadata: namespace:",
"spec: remote: addresses:",
"spec: remote: discoveryPort:",
"spec: remote: servicePort:",
"spec: gateways: ingress: name:",
"spec: gateways: egress: name:",
"spec: security: trustDomain:",
"spec: security: clientID:",
"spec: security: certificateChain: kind: ConfigMap name:",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project red-mesh-system",
"kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert",
"oc create -n red-mesh-system -f servicemeshpeer.yaml",
"oc -n red-mesh-system get servicemeshpeer green-mesh -o yaml",
"status: discoveryStatus: active: - pod: istiod-red-mesh-b65457658-9wq5j remotes: - connected: true lastConnected: \"2021-10-05T13:02:25Z\" lastFullSync: \"2021-10-05T13:02:25Z\" source: 10.128.2.149 watch: connected: true lastConnected: \"2021-10-05T13:02:55Z\" lastDisconnectStatus: 503 Service Unavailable lastFullSync: \"2021-10-05T13:05:43Z\"",
"kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: # export ratings.mesh-x-bookinfo as ratings.bookinfo - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: red-ratings alias: namespace: bookinfo name: ratings # export any service in red-mesh-bookinfo namespace with label export-service=true - type: LabelSelector labelSelector: namespace: red-mesh-bookinfo selector: matchLabels: export-service: \"true\" aliases: # export all matching services as if they were in the bookinfo namespace - namespace: \"*\" name: \"*\" alias: namespace: bookinfo",
"metadata: name:",
"metadata: namespace:",
"spec: exportRules: - type:",
"spec: exportRules: - type: NameSelector nameSelector: namespace: name:",
"spec: exportRules: - type: NameSelector nameSelector: alias: namespace: name:",
"spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue>",
"spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue> aliases: - namespace: name: alias: namespace: name:",
"kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: blue-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: \"*\" name: ratings",
"kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: west-data-center name: \"*\"",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project red-mesh-system",
"apiVersion: federation.maistra.io/v1 kind: ExportedServiceSet metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: ratings alias: namespace: bookinfo name: red-ratings - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: reviews",
"oc create -n <ControlPlaneNamespace> -f <ExportedServiceSet.yaml>",
"oc create -n red-mesh-system -f export-to-green-mesh.yaml",
"oc get exportedserviceset <PeerMeshExportedTo> -o yaml",
"oc -n red-mesh-system get exportedserviceset green-mesh -o yaml",
"status: exportedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.red-mesh-bookinfo.svc.cluster.local name: ratings namespace: red-mesh-bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: reviews.red-mesh-bookinfo.svc.cluster.local name: reviews namespace: red-mesh-bookinfo",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings",
"metadata: name:",
"metadata: namespace:",
"spec: importRules: - type:",
"spec: importRules: - type: NameSelector nameSelector: namespace: name:",
"spec: importRules: - type: NameSelector importAsLocal:",
"spec: importRules: - type: NameSelector nameSelector: namespace: name: alias: namespace: name:",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: blue-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: west-data-center name: \"*\"",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project green-mesh-system",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: red-ratings alias: namespace: bookinfo name: ratings",
"oc create -n <ControlPlaneNamespace> -f <ImportedServiceSet.yaml>",
"oc create -n green-mesh-system -f import-from-red-mesh.yaml",
"oc get importedserviceset <PeerMeshImportedInto> -o yaml",
"oc -n green-mesh-system get importedserviceset/red-mesh -o yaml",
"status: importedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.bookinfo.svc.red-mesh-imports.local name: ratings namespace: bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: \"\" name: \"\" namespace: \"\"",
"kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: true nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings #Locality within which imported services should be associated. locality: region: us-west",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project <smcp-system>",
"oc project green-mesh-system",
"oc edit -n <smcp-system> -f <ImportedServiceSet.yaml>",
"oc edit -n green-mesh-system -f import-from-red-mesh.yaml",
"oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443",
"oc project <smcp-system>",
"oc project green-mesh-system",
"apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: default-failover namespace: bookinfo spec: host: \"ratings.bookinfo.svc.cluster.local\" trafficPolicy: loadBalancer: localityLbSetting: enabled: true failover: - from: us-east to: us-west outlierDetection: consecutive5xxErrors: 3 interval: 10s baseEjectionTime: 1m",
"oc create -n <application namespace> -f <DestinationRule.yaml>",
"oc create -n bookinfo -f green-mesh-us-west-DestinationRule.yaml",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway url: file:///opt/filters/openid.wasm sha256: 1ef0c9a92b0420cf25f7fe5d481b231464bc88f486ca3b9c83ed5cc21d2f6210 phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress",
"oc apply -f plugin.yaml",
"schemaVersion: 1 name: <your-extension> description: <description> version: 1.0.0 phase: PreAuthZ priority: 100 module: extension.wasm",
"apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.1 phase: PostAuthZ priority: 100",
"oc apply -f <extension>.yaml",
"apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.2 phase: PostAuthZ priority: 100",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: header-append namespace: istio-system spec: selector: matchLabels: app: httpbin url: oci://quay.io/maistra-dev/header-append-filter:2.2 phase: STATS pluginConfig: first-header: some-value another-header: another-value",
"cat <<EOM | oc apply -f - apiVersion: kiali.io/v1alpha1 kind: OSSMConsole metadata: namespace: openshift-operators name: ossmconsole EOM",
"delete ossmconsoles <custom_resource_name> -n <custom_resource_namespace>",
"for r in USD(oc get ossmconsoles --ignore-not-found=true --all-namespaces -o custom-columns=NS:.metadata.namespace,N:.metadata.name --no-headers | sed 's/ */:/g'); do oc delete ossmconsoles -n USD(echo USDr|cut -d: -f1) USD(echo USDr|cut -d: -f2); done",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> 1 spec: selector: 2 labels: app: <product_page> pluginConfig: <yaml_configuration> url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 phase: AUTHZ priority: 100",
"oc apply -f threescale-wasm-auth-bookinfo.yaml",
"apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-backend spec: hosts: - su1.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-backend spec: host: su1.3scale.net trafficPolicy: tls: mode: SIMPLE sni: su1.3scale.net",
"oc apply -f service-entry-threescale-saas-backend.yml",
"oc apply -f destination-rule-threescale-saas-backend.yml",
"apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-system spec: hosts: - multitenant.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-system spec: host: multitenant.3scale.net trafficPolicy: tls: mode: SIMPLE sni: multitenant.3scale.net",
"oc apply -f service-entry-threescale-saas-system.yml",
"oc apply -f <destination-rule-threescale-saas-system.yml>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> spec: pluginConfig: api: v1",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: system: name: <saas_porta> upstream: <object> token: <my_account_token> ttl: 300",
"apiVersion: maistra.io/v1 upstream: name: outbound|443||multitenant.3scale.net url: \"https://myaccount-admin.3scale.net/\" timeout: 5000",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: backend: name: backend upstream: <object>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - id: \"2555417834789\" token: service_token authorities: - \"*.app\" - 0.0.0.0 - \"0.0.0.0:8443\" credentials: <object> mapping_rules: <object>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: <array_of_lookup_queries> app_id: <array_of_lookup_queries> app_key: <array_of_lookup_queries>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: - <source_type>: <object> - <source_type>: <object> app_id: - <source_type>: <object> app_key: - <source_type>: <object>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: mapping_rules: - method: GET pattern: / usages: - name: hits delta: 1 - method: GET pattern: /products/ usages: - name: products delta: 1 - method: ANY pattern: /products/{id}/sold usages: - name: sales delta: 1 - name: products delta: 1",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key>",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key>",
"aladdin:opensesame: Authorization: Basic YWxhZGRpbjpvcGVuc2VzYW1l",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - drop: head: 1 - base64_urlsafe - split: max: 2 app_key: - header: keys: - app_key",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - reverse - glob: - Basic - drop: tail: 1 - base64_urlsafe - split: max: 2 - test: if: length: min: 2 then: - strlen: max: 63 - or: - strlen: min: 1 - drop: tail: 1 - assert: - and: - reverse - or: - strlen: min: 8 - glob: - aladdin - admin",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - filter: path: - envoy.filters.http.jwt_authn - \"0\" keys: - azp - aud ops: - take: head: 1",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - x-jwt-payload ops: - base64_urlsafe - json: - keys: - azp - aud - take: head: 1 ,,,",
"apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 imagePullSecret: <optional_pull_secret_resource> phase: AUTHZ priority: 100 selector: labels: app: <product_page> pluginConfig: api: v1 system: name: <system_name> upstream: name: outbound|443||multitenant.3scale.net url: https://istiodevel-admin.3scale.net/ timeout: 5000 token: <token> backend: name: <backend_name> upstream: name: outbound|443||su1.3scale.net url: https://su1.3scale.net/ timeout: 5000 extensions: - no_body services: - id: '2555417834780' authorities: - \"*\" credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key> app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key>",
"apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance",
"3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"",
"3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"",
"export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}",
"export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"oc get pods -n istio-system",
"oc logs istio-system",
"oc get pods -n openshift-operators",
"NAME READY STATUS RESTARTS AGE istio-operator-bb49787db-zgr87 1/1 Running 0 15s jaeger-operator-7d5c4f57d8-9xphf 1/1 Running 0 2m42s kiali-operator-f9c8d84f4-7xh2v 1/1 Running 0 64s",
"oc get pods -n openshift-operators-redhat",
"NAME READY STATUS RESTARTS AGE elasticsearch-operator-d4f59b968-796vq 1/1 Running 0 15s",
"oc logs -n openshift-operators <podName>",
"oc logs -n openshift-operators istio-operator-bb49787db-zgr87",
"oc get pods -n istio-system",
"NAME READY STATUS RESTARTS AGE grafana-6776785cfc-6fz7t 2/2 Running 0 102s istio-egressgateway-5f49dd99-l9ppq 1/1 Running 0 103s istio-ingressgateway-6dc885c48-jjd8r 1/1 Running 0 103s istiod-basic-6c9cc55998-wg4zq 1/1 Running 0 2m14s jaeger-6865d5d8bf-zrfss 2/2 Running 0 100s kiali-579799fbb7-8mwc8 1/1 Running 0 46s prometheus-5c579dfb-6qhjk 2/2 Running 0 115s",
"oc get smcp -n istio-system",
"NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.1.3 4m2s",
"NAME READY STATUS TEMPLATE VERSION AGE basic-install 10/10 UpdateSuccessful default v1.1 3d16h",
"oc describe smcp <smcp-name> -n <controlplane-namespace>",
"oc describe smcp basic -n istio-system",
"oc get jaeger -n istio-system",
"NAME STATUS VERSION STRATEGY STORAGE AGE jaeger Running 1.30.0 allinone memory 15m",
"oc get kiali -n istio-system",
"NAME AGE kiali 15m",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc get route -n istio-system jaeger -o jsonpath='{.spec.host}'",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc project istio-system",
"oc edit smcp <smcp_name>",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: proxy: accessLogging: file: name: /dev/stdout #file name",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.5",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.5 gather <namespace>",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.5 proxy: runtime: container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi tracing: type: Jaeger gateways: ingress: # istio-ingressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 meshExpansionPorts: [] egress: # istio-egressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 additionalIngress: some-other-ingress-gateway: {} additionalEgress: some-other-egress-gateway: {} policy: type: Mixer mixer: # only applies if policy.type: Mixer enableChecks: true failOpen: false telemetry: type: Istiod # or Mixer mixer: # only applies if telemetry.type: Mixer, for v1 telemetry sessionAffinity: false batching: maxEntries: 100 maxTime: 1s adapters: kubernetesenv: true stdio: enabled: true outputAsJSON: true addons: grafana: enabled: true install: config: env: {} envSecrets: {} persistence: enabled: true storageClassName: \"\" accessMode: ReadWriteOnce capacity: requests: storage: 5Gi service: ingress: contextPath: /grafana tls: termination: reencrypt kiali: name: kiali enabled: true install: # install kiali CR if not present dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali jaeger: name: jaeger install: storage: type: Elasticsearch # or Memory memory: maxTraces: 100000 elasticsearch: nodeCount: 3 storage: {} redundancyPolicy: SingleRedundancy indexCleaner: {} ingress: {} # jaeger ingress configuration runtime: components: pilot: deployment: replicas: 2 pod: affinity: {} container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi grafana: deployment: {} pod: {} kiali: deployment: {} pod: {}",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: general: logging: componentLevels: {} # misc: error logAsJSON: false validationMessages: true",
"logging:",
"logging: componentLevels:",
"logging: logAsJSON:",
"validationMessages:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - YourProfileName",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.5 tracing: sampling: 100 type: Jaeger",
"tracing: sampling:",
"tracing: type:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: kiali: name: kiali enabled: true install: dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali",
"spec: addons: kiali: name:",
"kiali: enabled:",
"kiali: install:",
"kiali: install: dashboard:",
"kiali: install: dashboard: viewOnly:",
"kiali: install: dashboard: enableGrafana:",
"kiali: install: dashboard: enablePrometheus:",
"kiali: install: dashboard: enableTracing:",
"kiali: install: service:",
"kiali: install: service: metadata:",
"kiali: install: service: metadata: annotations:",
"kiali: install: service: metadata: labels:",
"kiali: install: service: ingress:",
"kiali: install: service: ingress: metadata: annotations:",
"kiali: install: service: ingress: metadata: labels:",
"kiali: install: service: ingress: enabled:",
"kiali: install: service: ingress: contextPath:",
"install: service: ingress: hosts:",
"install: service: ingress: tls:",
"kiali: install: service: nodePort:",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.5 tracing: sampling: 100 type: Jaeger",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.5 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger install: storage: type: Memory",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.5 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.5 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.5 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR",
"apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true",
"oc login https://<HOSTNAME>:6443",
"oc project istio-system",
"oc edit -n openshift-distributed-tracing -f jaeger.yaml",
"apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true",
"oc get pods -n openshift-distributed-tracing",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"collector: replicas:",
"spec: collector: options: {}",
"options: collector: num-workers:",
"options: collector: queue-size:",
"options: kafka: producer: topic: jaeger-spans",
"options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092",
"options: log-level:",
"options: otlp: enabled: true grpc: host-port: 4317 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"options: otlp: enabled: true http: cors: allowed-headers: [<header-name>[, <header-name>]*] allowed-origins: * host-port: 4318 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 read-timeout: 0s read-header-timeout: 2s idle-timeout: 0s tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"spec: sampling: options: {} default_strategy: service_strategy:",
"default_strategy: type: service_strategy: type:",
"default_strategy: param: service_strategy: param:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5",
"spec: sampling: options: default_strategy: type: probabilistic param: 1",
"spec: storage: type:",
"storage: secretname:",
"storage: options: {}",
"storage: esIndexCleaner: enabled:",
"storage: esIndexCleaner: numberOfDays:",
"storage: esIndexCleaner: schedule:",
"elasticsearch: properties: doNotProvision:",
"elasticsearch: properties: name:",
"elasticsearch: nodeCount:",
"elasticsearch: resources: requests: cpu:",
"elasticsearch: resources: requests: memory:",
"elasticsearch: resources: limits: cpu:",
"elasticsearch: resources: limits: memory:",
"elasticsearch: redundancyPolicy:",
"elasticsearch: useCertManagement:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy",
"es: server-urls:",
"es: max-doc-count:",
"es: max-num-spans:",
"es: max-span-age:",
"es: sniffer:",
"es: sniffer-tls-enabled:",
"es: timeout:",
"es: username:",
"es: password:",
"es: version:",
"es: num-replicas:",
"es: num-shards:",
"es: create-index-templates:",
"es: index-prefix:",
"es: bulk: actions:",
"es: bulk: flush-interval:",
"es: bulk: size:",
"es: bulk: workers:",
"es: tls: ca:",
"es: tls: cert:",
"es: tls: enabled:",
"es: tls: key:",
"es: tls: server-name:",
"es: token-file:",
"es-archive: bulk: actions:",
"es-archive: bulk: flush-interval:",
"es-archive: bulk: size:",
"es-archive: bulk: workers:",
"es-archive: create-index-templates:",
"es-archive: enabled:",
"es-archive: index-prefix:",
"es-archive: max-doc-count:",
"es-archive: max-num-spans:",
"es-archive: max-span-age:",
"es-archive: num-replicas:",
"es-archive: num-shards:",
"es-archive: password:",
"es-archive: server-urls:",
"es-archive: sniffer:",
"es-archive: sniffer-tls-enabled:",
"es-archive: timeout:",
"es-archive: tls: ca:",
"es-archive: tls: cert:",
"es-archive: tls: enabled:",
"es-archive: tls: key:",
"es-archive: tls: server-name:",
"es-archive: token-file:",
"es-archive: username:",
"es-archive: version:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true",
"spec: query: replicas:",
"spec: query: options: {}",
"options: log-level:",
"options: query: base-path:",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger",
"spec: ingester: options: {}",
"options: deadlockInterval:",
"options: kafka: consumer: topic:",
"options: kafka: consumer: brokers:",
"options: log-level:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200",
"oc delete smmr -n istio-system default",
"oc get smcp -n istio-system",
"oc delete smcp -n istio-system <name_of_custom_resource>",
"oc -n openshift-operators delete ds -lmaistra-version",
"oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni clusterrole/ossm-cni clusterrolebinding/ossm-cni",
"oc delete clusterrole istio-view istio-edit",
"oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view",
"oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete",
"oc delete crds jaegers.jaegertracing.io",
"oc delete cm -n openshift-operators -lmaistra-version",
"oc delete sa -n openshift-operators -lmaistra-version",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.13.11",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.13.11",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.5",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.5 gather <namespace>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]",
"spec: global: pathNormalization: <option>",
"{ \"runtime\": { \"symlink_root\": \"/var/lib/istio/envoy/runtime\" } }",
"oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap",
"oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to \"v1.0\" if you are on the 1.0 stream. version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings",
"oc get jaeger -n istio-system",
"NAME AGE jaeger 3d21h",
"oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml",
"oc delete jaeger jaeger -n istio-system",
"oc create -f /tmp/jaeger-cr.yaml -n istio-system",
"rm /tmp/jaeger-cr.yaml",
"oc delete -f <jaeger-cr-file>",
"oc delete -f jaeger-prod-elasticsearch.yaml",
"oc create -f <jaeger-cr-file>",
"oc get pods -n jaeger-system -w",
"spec: version: v1.1",
"apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.headers[<header>]: \"value\"",
"apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.regex.headers[<header>]: \"<regular expression>\"",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project istio-system",
"oc create -n istio-system -f istio-installation.yaml",
"oc get smcp -n istio-system",
"NAME READY STATUS PROFILES VERSION AGE basic-install 11/11 ComponentsReady [\"default\"] v1.1.18 4m25s",
"oc get pods -n istio-system -w",
"NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project <your-project>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system default",
"oc edit smmr -n <controlplane-namespace>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true",
"apiVersion: \"authentication.istio.io/v1alpha1\" kind: \"Policy\" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {}",
"apiVersion: \"networking.istio.io/v1alpha3\" kind: \"DestinationRule\" metadata: name: \"default\" namespace: <CONTROL_PLANE_NAMESPACE>> spec: host: \"*.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3",
"oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false",
"oc delete secret istio.default",
"RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'`",
"oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem",
"oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem",
"openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt",
"openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt",
"diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt",
"sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem",
"openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt",
"openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt",
"diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt",
"head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem",
"openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem",
"/tmp/pod-cert-chain-workload.pem: OK",
"oc delete secret cacerts -n istio-system",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"",
"oc apply -f gateway.yaml",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080",
"oc apply -f vs.yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')",
"curl -s -I \"USDGATEWAY_URL/productpage\"",
"oc get svc istio-ingressgateway -n istio-system",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')",
"spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com",
"oc -n <control_plane_namespace> get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None",
"apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3",
"oc apply -f <VirtualService.yaml>",
"spec: hosts:",
"spec: http: - match:",
"spec: http: - match: - destination:",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/virtual-service-all-v1.yaml",
"oc get virtualservices -o yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml",
"oc get virtualservice reviews -o yaml",
"oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators",
"oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'",
"maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded",
"oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0",
"deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: /home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default",
"oc get deployment -n <namespace>",
"get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true'",
"oc apply -n <namespace> -f deployment.yaml",
"oc apply -n bookinfo -f deployment-ratings-v1.yaml",
"oc get deployment -n <namespace> <deploymentName> -o yaml",
"oc get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"",
"oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks",
"oc edit cm -n istio-system istio",
"oc new-project bookinfo",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system -o wide",
"NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/platform/kube/bookinfo.yaml",
"service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/bookinfo-gateway.yaml",
"gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/destination-rule-all.yaml",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.5/samples/bookinfo/networking/destination-rule-all-mtls.yaml",
"destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created",
"oc get pods -n bookinfo",
"NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc delete project bookinfo",
"oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'",
"curl \"http://USDGATEWAY_URL/productpage\"",
"export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')",
"echo USDJAEGER_URL",
"curl \"http://USDGATEWAY_URL/productpage\"",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one",
"istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret",
"gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1",
"mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits:",
"spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true",
"enabled",
"dashboard viewOnlyMode",
"ingress enabled",
"spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: \"https://grafana-istio-system.127.0.0.1.nip.io\" ingress: enabled: true",
"spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: \"http://jaeger-query-istio-system.127.0.0.1.nip.io\" ingress: enabled: true",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one",
"tracing: enabled:",
"jaeger: template:",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"",
"tracing: enabled:",
"ingress: enabled:",
"jaeger: template:",
"elasticsearch: nodeCount:",
"requests: cpu:",
"requests: memory:",
"limits: cpu:",
"limits: memory:",
"oc get route -n istio-system external-jaeger",
"NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...]",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"external-jaeger\" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. Make sure to set the correct namespace here sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"",
"tracing: enabled:",
"ingress: enabled:",
"jaeger: template:",
"elasticsearch: nodeCount:",
"requests: cpu:",
"requests: memory:",
"limits: cpu:",
"limits: memory:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: \"55 23 * * *\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true",
"apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance",
"3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"",
"3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"",
"export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}",
"export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"oc get pods -n istio-system",
"oc logs istio-system",
"oc delete smmr -n istio-system default",
"oc get smcp -n istio-system",
"oc delete smcp -n istio-system <name_of_custom_resource>",
"oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io",
"oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io",
"oc delete -n openshift-operators daemonset/istio-node",
"oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni",
"oc delete clusterrole istio-view istio-edit",
"oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view",
"oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete",
"oc delete crds jaegers.jaegertracing.io",
"oc delete svc admission-controller -n <operator-project>",
"oc delete project <istio-system-project>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/service_mesh/index |
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.5 Documentation Data Grid 8.5 Component Details Supported Configurations for Data Grid 8.5 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/configuring_data_grid_caches/rhdg-docs_datagrid |
Chapter 7. Assigning permissions and access using roles and groups Roles and groups have a similar purpose, which is to give users access and permissions to use applications. Groups are a collection of users to which you apply roles and attributes. Roles define specific application permissions and access control. A role typically applies to one type of user. For example, an organization may include admin , user , manager , and employee roles. An application can assign access and permissions to a role and then assign multiple users to that role so the users have the same access and permissions. For example, the Admin Console has roles that give permission to users to access different parts of the Admin Console. There is a global namespace for roles and each client also has its own dedicated namespace where roles can be defined. 7.1. Creating a realm role Realm-level roles are a namespace for defining your roles. To see the list of roles, click Roles in the menu. Procedure Click Add Role . Enter a Role Name . Enter a Description . Click Save . Add role The description field can be localized by specifying a substitution variable with ${var-name} strings. The localized value is configured to your theme within the themes property files. See the Server Developer Guide for more details. 7.2. Client roles Client roles are namespaces dedicated to clients. Each client gets its own namespace. Client roles are managed under the Roles tab for each client. You interact with this UI the same way you do for realm-level roles. 7.3. Converting a role to a composite role Any realm or client level role can become a composite role . A composite role is a role that has one or more additional roles associated with it. When a composite role is mapped to a user, the user gains the roles associated with the composite role. This inheritance is recursive so users also inherit any composite of composites. However, we recommend that composite roles are not overused. Procedure Click Roles in the menu. Click the role that you want to convert. Toggle Composite Roles to ON . Composite role The role selection UI is displayed on the page and you can associate realm level and client level roles to the composite role you are creating. In this example, the employee realm-level role is associated with the developer composite role. Any user with the developer role also inherits the employee role. Note When creating tokens and SAML assertions, any composite also has its associated roles added to the claims and assertions of the authentication response sent back to the client. 7.4. Assigning role mappings You can assign role mappings to a user through the Role Mappings tab for that user. Procedure Click Users in the menu. Click the user that you want to perform a role mapping on. If the user is not displayed, click View all users . Click the Role Mappings tab. Click the role you want to assign to the user in the Available Roles box. Click Add selected . Role mappings In the preceding example, we are assigning the composite role developer to a user. That role was created in the Composite Roles topic. Effective role mappings When the developer role is assigned, the employee role associated with the developer composite is displayed in the Effective Roles box. Effective Roles are the roles explicitly assigned to users and roles that are inherited from composites. 7.5. 
Using default roles Use default roles to automatically assign user role mappings when a user is created or imported through Identity Brokering . Procedure Click Roles in the menu Click the Default Roles tab. Default roles This screenshot shows that some default roles already exist. 7.6. Role scope mappings On creation of an OIDC access token or SAML assertion, the user role mappings become claims within the token or assertion. Applications use these claims to make access decisions on the resources controlled by the application. Red Hat Single Sign-On digitally signs access tokens and applications re-use them to invoke remotely secured REST services. However, these tokens have an associated risk. An attacker can obtain these tokens and use their permissions to compromise your networks. To prevent this situation, use Role Scope Mappings . Role Scope Mappings limit the roles declared inside an access token. When a client requests a user authentication, the access token they receive contains only the role mappings that are explicitly specified for the client's scope. The result is that you limit the permissions of each individual access token instead of giving the client access to all the users permissions. By default, each client gets all the role mappings of the user. You can view the role mappings in the Scope tab of each client. Full scope By default, the effective roles of scopes are every declared role in the realm. To change this default behavior, toggle Full Scope Allowed to ON and declare the specific roles you want in each client. You can also use client scopes to define the same role scope mappings for a set of clients. Partial scope 7.7. Groups Groups in Red Hat Single Sign-On manage a common set of attributes and role mappings for each user. Users can be members of any number of groups and inherit the attributes and role mappings assigned to each group. To manage groups, click Groups in the menu. Groups Groups are hierarchical. A group can have multiple subgroups but a group can have only one parent. Subgroups inherit the attributes and role mappings from their parent. Users inherit the attributes and role mappings from their parent as well. If you have a parent group and a child group, and a user that belongs only to the child group, the user in the child group inherits the attributes and role mappings of both the parent group and the child group. The following example includes a top-level Sales group and a child North America subgroup. To add a group: Click the group. Click New . Select the Groups icon in the tree to make a top-level group. Enter a group name in the Create Group screen. Click Save . The group management page is displayed. Group Attributes and role mappings you define are inherited by the groups and users that are members of the group. To add a user to a group: Click Users in the menu. Click the user that you want to perform a role mapping on. If the user is not displayed, click View all users . Click Groups . User groups Select a group from the Available Groups tree. Click Join . To remove a group from a user: Select the group from the Group Membership tree. Click Leave . In this example, the user jimlincoln is in the North America group. You can see jimlincoln displayed under the Members tab for the group. Group membership 7.7.1. Groups compared to roles Groups and roles have some similarities and differences. In Red Hat Single Sign-On, groups are a collection of users to which you apply roles and attributes. 
Roles define types of users, and applications assign permissions and access control to roles. Composite Roles are similar to Groups as they provide the same functionality. The difference between them is conceptual. Composite roles apply the permission model to a set of services and applications. Use composite roles to manage applications and services. Groups focus on collections of users and their roles in an organization. Use groups to manage users. 7.7.2. Using default groups To automatically assign group membership to any user who is created or imported through Identity Brokering , you use default groups. Click Groups in the menu. Click the Default Groups tab. Default groups This screenshot shows that some default groups already exist. | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_administration_guide/assigning_permissions_and_access_using_roles_and_groups |
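The Admin Console steps above can also be scripted with the Admin CLI (kcadm.sh). The following is a minimal sketch, not taken from this guide: it assumes kcadm.sh is on your PATH and that the server URL and the demorealm realm name are placeholders you replace with your own values; only the developer, employee, and jimlincoln names come from the examples above.

# Sketch only: log in, create the two realm roles, make "developer" a composite
# role that contains "employee", then map it to a user, who then inherits
# "employee" as an effective role.
kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin
kcadm.sh create roles -r demorealm -s name=employee -s 'description=Employee role'
kcadm.sh create roles -r demorealm -s name=developer -s 'description=Developer role'
kcadm.sh add-roles -r demorealm --rname developer --rolename employee
kcadm.sh add-roles -r demorealm --uusername jimlincoln --rolename developer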
Chapter 3. Creating a policy An Enterprise Contract policy is a rule or set of rules and Enterprise Contract-specific annotations. Enterprise Contract can perform several types of policy checks, including checking all of the policy rules required for Red Hat products . Enterprise Contract uses the general purpose policy engine called Open Policy Agent, or OPA. OPA defines its policy rules in their own language, called Rego. This means that the policy rules from OPA that are in an Enterprise Contract policy are also defined in Rego. Procedure Create a Rego file to define a new policy rule, as in the following example: 1 The METADATA comment block, the first 10 lines of code, which are all preceded by hashtags (#), is how rego specifies rule annotations so that Enterprise Contract can include those annotation details in its successes and violations report. For more information about rego metadata and annotations, see Metadata . For more information about the annotations that Enterprise Contract policy rules must contain, see Rule annotations . 2 This single policy rule verifies that the builder.id in your new policy rule matches the builder.id in your Supply-chain Levels for Software Artifacts, or SLSA , provenance. 3 input.attestations is a rego object that contains all of the information about your container image, its signature, and its attestations. See Policy Input for more information about where and how Enterprise Contract defines input.attestations contents. Tip You can save the input.attestations object to a JSON file so that you can borrow from it when you specify new policy rules. To save input.attestations as a JSON file, run a command that's similar to the following example: Create a policy configuration to use your new policy rule, as in the following example: echo " --- sources: - policy: - $(pwd)/rules.rego " > policy.yaml Use your new policy to validate your container image, and to display additional information in the successes and violations report, as in the following example: Verification Check the Successes and violations report to make sure that your new rule is in the successes list. Additional resources For a set of useful Enterprise Contract policy rules, see the ec-policies GitHub repository. For more information about OPA and Rego, see OPA's Policy Language content. For more information about SLSA provenance, see SLSA Provenance . 3.1. Configuring a policy You can configure an Enterprise Contract policy with an inline JSON or YAML string. This policy, sometimes called a config or a contract , specifies where Enterprise Contract should find the rules and data to use to apply the policies you want to enforce. You can also include or exclude a single rule or a particular package of rules. Procedure Configure your policy in the command line as a JSON or YAML string, as in the following example: (Optional) Exclude a particular package of rules from your Enterprise Contract policy, as in the following example: This command includes every rule from every package except for the rules in the specified packages. (Optional) Exclude a single rule, as in the following example: This command includes every rule from the attestation_task_bundle package except for the unacceptable_task_bundle rule. (Optional) Include rules from only a particular package, as in the following example: This command includes only the rules from the specified packages. (Optional) Include only some rules from a particular package. 
This means that you can specify both include and exclude to select only the rules you want your Enterprise Contract policy to include, as in the following example: 1 The asterisk (*) acts as a wildcard to match any package. Note that it does not match partial names, which means that, for example, you can't specify "s*" to match every package that starts with "s". These commands specify that you want to include only the unacceptable_task_bundle rule from the attestation_task_bundle package, and exclude all the other rules in that package. (Optional) Exclude certain checks so that Enterprise Contract can validate your container image even if those checks fail or don't complete, as in the following example: This command specifies that, if either of the identified checks fails or doesn't complete, Enterprise Contract can still finish validating your container image. (Optional) Modify the defaults for rules in a package by running either the config.policy.include command or the config.policy.exclude command, along with a list of strings. Your list of strings should include one of the following: "package name" - Choose from the packages in the Available rule collections list. "rule name" - Specify a rule name by entering the name of the package and the rule code, separated by a dot (.), as in this example: attestation_type.unknown_att_type . You can find rule codes under "Attestation type" here . "package name:term" - Some policy rules process a list of items. When you add "term" to the "package name" string, you can exclude or include a particular item from that list. This works similarly to "package name," except that it applies only to policy rules in the package that match that term. For example, if you run the test package, you can choose to ignore a given test case but include all the others. "rule name:term" - This is similar to "package name:term" except that, instead of including or excluding an item from a package, you can include or exclude a particular package policy rule . "@collection name" - Add this to your string to specify a predefined collection of rules. Make sure you prefix the collection name with the @ symbol. Choose from the available rule collections here . Additional resources For a comprehensive list of release policy details, see Release Policy . | [
"echo 'package zero_to_hero import future.keywords.contains import future.keywords.if import future.keywords.in METADATA 1 title: Builder ID description: Verify the SLSA Provenance has the builder.id set to the expected value. custom: short_name: builder_id 2 failure_msg: The builder ID %q is not the expected %q solution: >- Ensure the correct build system was used to build the container image. deny contains result if { some attestation in input.attestations 3 attestation.statement.predicateType == \"https://slsa.dev/provenance/v0.2\" expected := \"https://localhost/dummy-id\" got := attestation.statement.predicate.builder.id expected != got result := { \"code\": \"zero_to_hero.builder_id\", \"msg\": sprintf(\"The builder ID %q is not expected, %q\", [got, expected]) } } ' > rules.rego",
"ec validate image --public-key cosign.pub --image \"USDREPOSITORY:latest\" --policy policy.yaml --show-successes --info --output yaml",
"echo \" --- sources: - policy: - USD(pwd)/rules.rego \" > policy.yaml",
"ec validate image --public-key cosign.pub --image \"USDREPOSITORY:latest\" --policy policy.yaml --show-successes --info --output yaml",
"ec validate image --policy '{ \"configuration\": { \"include\": [\"@minimal\"] }, \"sources\": [ { \"policy\": [\"oci::quay.io/enterprise-contract/ec-release-policy:latest\"], \"data\": [\"git::https://github.com/enterprise-contract/ec-policies//example/data\"] } ] }'",
"exclude: - attestation_task_bundle - slsa_build_scripted_build",
"exclude: - attestation_task_bundle.unacceptable_task_bundle",
"include: - test - java",
"include: - \"*\" 1 - attestation_task_bundle.unacceptable_task_bundle exclude: - attestation_task_bundle.*",
"exclude: - test:get-clair-scan - test:clamav-scan"
]
| https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/managing_compliance_with_enterprise_contract/proc_creating-an-ec-policy_enterprise_contract-rhtap |
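A minimal end-to-end sketch of the steps in this chapter, combining the policy.yaml layout and the validation command shown above with an include/exclude configuration that mirrors the inline JSON example; the exact YAML schema and the rule names are assumptions to adapt to your own rules, so check them against the Enterprise Contract policy documentation:

# Write a policy configuration that points at the local Rego rules and
# excludes one rule from the attestation_task_bundle package.
cat > policy.yaml <<'EOF'
---
configuration:
  include:
    - "*"
  exclude:
    - attestation_task_bundle.unacceptable_task_bundle
sources:
  - policy:
      - ./rules.rego
EOF

# Validate the image against the policy, as in the example above.
ec validate image --public-key cosign.pub --image "$REPOSITORY:latest" --policy policy.yaml --show-successes --info --output yaml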
Edge computing | Edge computing OpenShift Container Platform 4.17 Configure and deploy OpenShift Container Platform clusters at the network edge Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/edge_computing/index |
24.6.2. Running the Net-SNMP Daemon | 24.6.2. Running the Net-SNMP Daemon The net-snmp package contains snmpd , the SNMP Agent Daemon. This section provides information on how to start, stop, and restart the snmpd service, and shows how to enable it in a particular runlevel. For more information on the concept of runlevels and how to manage system services in Red Hat Enterprise Linux in general, see Chapter 12, Services and Daemons . 24.6.2.1. Starting the Service To run the snmpd service in the current session, type the following at a shell prompt as root : service snmpd start To configure the service to be automatically started at boot time, use the following command: chkconfig snmpd on This will enable the service in runlevel 2, 3, 4, and 5. Alternatively, you can use the Service Configuration utility as described in Section 12.2.1.1, "Enabling and Disabling a Service" . 24.6.2.2. Stopping the Service To stop the running snmpd service, type the following at a shell prompt as root : service snmpd stop To disable starting the service at boot time, use the following command: chkconfig snmpd off This will disable the service in all runlevels. Alternatively, you can use the Service Configuration utility as described in Section 12.2.1.1, "Enabling and Disabling a Service" . 24.6.2.3. Restarting the Service To restart the running snmpd service, type the following at a shell prompt: service snmpd restart This will stop the service and start it again in quick succession. To only reload the configuration without stopping the service, run the following command instead: service snmpd reload This will cause the running snmpd service to reload the configuration. Alternatively, you can use the Service Configuration utility as described in Section 12.2.1.2, "Starting, Restarting, and Stopping a Service" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-system_monitoring_tools-net-snmp-running |
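As a brief consolidated sketch of the commands in this section, run as root on RHEL 6; the status action is not shown above but is a standard init-script action, so treat it as an assumption to verify on your system:

service snmpd start     # start the agent for the current session
chkconfig snmpd on      # enable it in runlevels 2, 3, 4, and 5
service snmpd reload    # re-read the configuration without stopping the service
service snmpd status    # optional: confirm that the daemon is running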
Chapter 17. Managing containers by using the RHEL web console | Chapter 17. Managing containers by using the RHEL web console You can use the Red Hat Enterprise Linux web console to manage your containers and pods. With the web console, you can create containers as a non-root or root user. As a root user, you can create system containers with extra privileges and options. As a non-root user, you have two options: To only create user containers, you can use the web console in its default mode - Limited access . To create both user and system containers, click Administrative access in the top panel of the web console page. For details about differences between root and rootless containers, see Special considerations for rootless containers . 17.1. Creating containers in the web console You can create a container and add port mappings, volumes, environment variables, health checks, and so on. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. Click Create container . In the Name field, enter the name of your container. Provide additional information in the Details tab. Available only with the administrative access : Select the Owner of the container: System or User. In the Image drop-down list select or search the container image in selected registries. Optional: Check the Pull latest image checkbox to pull the latest container image. The Command field specifies the command. You can change the default command if you need. Optional: Check the With terminal checkbox to run your container with a terminal. The Memory limit field specifies the memory limit for the container. To change the default memory limit, check the checkbox and specify the limit. Available only for system containers : In the CPU shares field , specify the relative amount of CPU time. Default value is 1024. Check the checkbox to modify the default value. Available only for system containers : In the Restart policy drop-down menu, select one of the following options: No (default value): No action. On Failure : Restarts a container on failure. Always : Restarts a container when exits or after rebooting the system. Provide the required information in the Integration tab. Click Add port mapping to add port mapping between the container and host system. Enter the IP address , Host port , Container port and Protocol . Click Add volume to add volume. Enter the host path , Container path . You can check the Writable option checkbox to create a writable volume. In the SELinux drop-down list, select one of the following options: No Label , Shared or Private . Click Add variable to add environment variable. Enter the Key and Value . Provide the required information in the Health check tab. In the Command fields, enter the 'healthcheck' command. Specify the healthcheck options: Interval (default is 30 seconds) Timeout (default is 30 seconds) Start period Retries (default is 3) When unhealthy: Select one of the following options: No action (default): Take no action. Restart : Restart the container. Stop : Stop the container. Force stop : Force stops the container, it does not wait for the container to exit. Click Create and run to create and run the container. 
Note You can click Create to only create the container. Verification Click Podman containers in the main menu. You can see the newly created container in the Containers table. 17.2. Inspecting containers in the web console You can display detailed information about a container in the web console. Prerequisites The container was created. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. Click the > arrow icon to see details of the container. In the Details tab, you can see container ID, Image, Command, Created (timestamp when the container was created), and its State. Available only for system containers : You can also see IP address, MAC address, and Gateway address. In the Integration tab, you can see environment variables, port mappings, and volumes. In the Log tab, you can see container logs. In the Console tab, you can interact with the container using the command line. 17.3. Changing the state of containers in the web console In the Red Hat Enterprise Linux web console, you can start, stop, restart, pause, and rename containers on the system. Prerequisites The container was created. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Containers table, select the container you want to modify and click the overflow menu and select the action you want to perform: Start Stop Force stop Restart Force restart Pause Rename 17.4. Committing containers in the web console You can create a new image based on the current state of the container. Prerequisites The container was created. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Containers table, select the container you want to modify and click the overflow menu and select Commit . In the Commit container form, add the following details: In the New image name field, enter the image name. Optional: In the Tag field, enter the tag. Optional: In the Author field, enter your name. Optional: In the Command field, change command if you need. Optional: Check the Options you need: Pause container when creating image: The container and its processes are paused while the image is committed. Use legacy Docker format: if you do not use the Docker image format, the OCI format is used. Click Commit . Verification Click the Podman containers in the main menu. You can see the newly created image in the Images table. 17.5. Creating a container checkpoint in the web console Using the web console, you can set a checkpoint on a running container or an individual application and store its state to disk. Note Creating a checkpoint is available only for system containers. 
Prerequisites The container is running. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Containers table, select the container you want to modify and click the overflow icon menu and select Checkpoint . Optional: In the Checkpoint container form, check the options you need: Keep all temporary checkpoint files: keep all temporary log and statistics files created by CRIU during checkpointing. These files are not deleted if checkpointing fails for further debugging. Leave running after writing checkpoint to disk: leave the container running after checkpointing instead of stopping it. Support preserving established TCP connections Click Checkpoint . Verification Click the Podman containers in the main menu. Select the container you checkpointed, click the overflow menu icon and verify that there is a Restore option. 17.6. Restoring a container checkpoint in the web console You can use data saved to restore the container after a reboot at the same point in time it was checkpointed. Note Creating a checkpoint is available only for system containers. Prerequisites The container was checkpointed. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Containers table, select the container you want to modify and click the overflow menu and select Restore . Optional: In the Restore container form, check the options you need: Keep all temporary checkpoint files : Keep all temporary log and statistics files created by CRIU during checkpointing. These files are not deleted if checkpointing fails for further debugging. Restore with established TCP connections Ignore IP address if set statically : If the container was started with IP address the restored container also tries to use that IP address and restore fails if that IP address is already in use. This option is applicable if you added port mapping in the Integration tab when you create the container. Ignore MAC address if set statically : If the container was started with MAC address the restored container also tries to use that MAC address and restore fails if that MAC address is already in use. Click Restore . Verification Click the Podman containers in the main menu. You can see that the restored container in the Containers table is running. 17.7. Deleting containers in the web console You can delete an existing container using the web console. Prerequisites The container exists on your system. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Containers table, select the container you want to delete and click the overflow menu and select Delete . 
The pop-up window appears. Click Delete to confirm your choice. Verification Click the Podman containers in the main menu. The deleted container should not be listed in the Containers table. 17.8. Creating pods in the web console You can create pods in the RHEL web console interface. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. Click Create pod . Provide additional information in the Create pod form: Available only with the administrative access : Select the Owner of the container: System or User. In the Name field, enter the name of your container. Click Add port mapping to add port mapping between container and host system. Enter the IP address, Host port, Container port and Protocol. Click Add volume to add volume. Enter the host path, Container path. You can check the Writable checkbox to create a writable volume. In the SELinux drop-down list, select one of the following options: No Label, Shared or Private. Click Create . Verification Click Podman containers in the main menu. You can see the newly created pod in the Containers table. 17.9. Creating containers in the pod in the web console You can create a container in a pod. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. Click Create container in pod . In the Name field, enter the name of your container. Provide the required information in the Details tab. Available only with the administrative access : Select the Owner of the container: System or User. In the Image drop down list select or search the container image in selected registries. Optional: Check the Pull latest image checkbox to pull the latest container image. The Command field specifies the command. You can change the default command if you need. Optional: Check the With terminal checkbox to run your container with a terminal. The Memory limit field specifies the memory limit for the container. To change the default memory limit, check the checkbox and specify the limit. Available only for system containers : In the CPU shares field , specify the relative amount of CPU time. Default value is 1024. Check the checkbox to modify the default value. Available only for system containers : In the Restart policy drop down menu, select one of the following options: No (default value): No action. On Failure : Restarts a container on failure. Always : Restarts container when exits or after system boot. Provide the required information in the Integration tab. Click Add port mapping to add port mapping between the container and host system. Enter the IP address , Host port , Container port and Protocol . Click Add volume to add volume. Enter the host path , Container path . You can check the Writable option checkbox to create a writable volume. In the SELinux drop down list, select one of the following options: No Label , Shared , or Private . Click Add variable to add environment variable. 
Enter the Key and Value . Provide the required information in the Health check tab. In the Command fields, enter the healthcheck command. Specify the healthcheck options: Interval (default is 30 seconds) Timeout (default is 30 seconds) Start period Retries (default is 3) When unhealthy: Select one of the following options: No action (default): Take no action. Restart : Restart the container. Stop : Stop the container. Force stop : Force stops the container, it does not wait for the container to exit. Note The owner of the container is the same as the owner of the pod. Note In the pod, you can inspect containers, change the status of containers, commit containers, or delete containers. Verification Click Podman containers in the main menu. You can see the newly created container in the pod under the Containers table. 17.10. Changing the state of pods in the web console You can change the status of the pod. Prerequisites The pod was created. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Containers table, select the pod you want to modify and click the overflow menu and select the action you want to perform: Start Stop Force stop Restart Force restart Pause 17.11. Deleting pods in the web console You can delete an existing pod using the web console. Prerequisites The pod exists on your system. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Containers table, select the pod you want to delete and click the overflow menu and select Delete . In the following pop-up window, click Delete to confirm your choice. Warning You remove all containers in the pod. Verification Click the Podman containers in the main menu. The deleted pod should not be listed in the Containers table. | [
"yum install cockpit-podman",
"yum install cockpit-podman",
"yum install cockpit-podman",
"yum install cockpit-podman",
"yum install cockpit-podman",
"yum install cockpit-podman",
"yum install cockpit-podman",
"yum install cockpit-podman",
"yum install cockpit-podman",
"yum install cockpit-podman",
"yum install cockpit-podman"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/managing-containers-by-using-the-rhel-web-console |
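The procedures in this chapter are performed in the web console itself; as a hedged command-line sketch of the prerequisites on a RHEL 8 host (the cockpit.socket unit name and port 9090 are the standard web console defaults, not taken from the text above):

yum install cockpit cockpit-podman      # web console plus the Podman add-on
systemctl enable --now cockpit.socket   # make the web console reachable, typically on port 9090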
Storage Administration Guide | Storage Administration Guide Red Hat Enterprise Linux 7 Deploying and configuring single-node storage in RHEL 7 Edited by Marek Suchanek Red Hat Customer Content Services [email protected] Edited by Apurva Bhide Red Hat Customer Content Services [email protected] Milan Navratil Red Hat Customer Content Services Jacquelynn East Red Hat Customer Content Services Don Domingo Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/index |
18.8. Managing a Virtual Network | 18.8. Managing a Virtual Network To configure a virtual network on your system: From the Edit menu, select Connection Details . Figure 18.10. Selecting a host physical machine's details This will open the Connection Details menu. Click the Virtual Networks tab. Figure 18.11. Virtual network configuration All available virtual networks are listed on the left-hand box of the menu. You can edit the configuration of a virtual network by selecting it from this box and editing as you see fit. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-Virtualization-Virtual_Networking-Managing_a_virtual_network |
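The steps above use the virt-manager GUI; a rough command-line equivalent with virsh is sketched below. The network name default is an assumption, so list the networks first to confirm what exists on your host:

virsh net-list --all     # list all virtual networks, active and inactive
virsh net-info default   # show details of one network
virsh net-edit default   # open its XML definition in an editor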
Chapter 16. Set Up Isolation Levels | Chapter 16. Set Up Isolation Levels 16.1. About Isolation Levels Isolation levels determine when readers can view a concurrent write. READ_COMMITTED and REPEATABLE_READ are the two isolation modes offered in Red Hat JBoss Data Grid. READ_COMMITTED . This isolation level is applicable to a wide variety of requirements. This is the default value in Remote Client-Server and Library modes. REPEATABLE_READ . Important The only valid value for locks in Remote Client-Server mode is the default READ_COMMITTED value. The value explicitly specified with the isolation value is ignored. If the locking element is not present in the configuration, the default isolation value is READ_COMMITTED . For isolation mode configuration examples in JBoss Data Grid, see the lock striping configuration samples: See Section 15.2, "Configure Lock Striping (Remote Client-Server Mode)" for a Remote Client-Server mode configuration sample. See Section 15.3, "Configure Lock Striping (Library Mode)" for a Library mode configuration sample. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-Set_Up_Isolation_Levels |
Chapter 22. Manually configuring the /etc/resolv.conf file | Chapter 22. Manually configuring the /etc/resolv.conf file By default, NetworkManager dynamically updates the /etc/resolv.conf file with the DNS settings from active NetworkManager connection profiles. However, you can disable this behavior and manually configure DNS settings in /etc/resolv.conf . Note Alternatively, if you require a specific order of DNS servers in /etc/resolv.conf , see Configuring the order of DNS servers . 22.1. Disabling DNS processing in the NetworkManager configuration By default, NetworkManager manages DNS settings in the /etc/resolv.conf file, and you can configure the order of DNS servers. Alternatively, you can disable DNS processing in NetworkManager if you prefer to manually configure DNS settings in /etc/resolv.conf . Procedure As the root user, create the /etc/NetworkManager/conf.d/90-dns-none.conf file with the following content by using a text editor: Reload the NetworkManager service: Note After you reload the service, NetworkManager no longer updates the /etc/resolv.conf file. However, the last contents of the file are preserved. Optional: Remove the Generated by NetworkManager comment from /etc/resolv.conf to avoid confusion. Verification Edit the /etc/resolv.conf file and manually update the configuration. Reload the NetworkManager service: Display the /etc/resolv.conf file: If you successfully disabled DNS processing, NetworkManager did not override the manually configured settings. Troubleshooting Display the NetworkManager configuration to ensure that no other configuration file with a higher priority overrode the setting: Additional resources NetworkManager.conf(5) man page on your system Configuring the order of DNS servers using NetworkManager 22.2. Replacing /etc/resolv.conf with a symbolic link to manually configure DNS settings By default, NetworkManager manages DNS settings in the /etc/resolv.conf file, and you can configure the order of DNS servers. Alternatively, you can disable DNS processing in NetworkManager if you prefer to manually configure DNS settings in /etc/resolv.conf . For example, NetworkManager does not automatically update the DNS configuration if /etc/resolv.conf is a symbolic link. Prerequisites The NetworkManager rc-manager configuration option is not set to file . To verify, use the NetworkManager --print-config command. Procedure Create a file, such as /etc/resolv.conf.manually-configured , and add the DNS configuration for your environment to it. Use the same parameters and syntax as in the original /etc/resolv.conf . Remove the /etc/resolv.conf file: Create a symbolic link named /etc/resolv.conf that refers to /etc/resolv.conf.manually-configured : Additional resources resolv.conf(5) and NetworkManager.conf(5) man pages on your system Configuring the order of DNS servers using NetworkManager | [
"[main] dns=none",
"systemctl reload NetworkManager",
"systemctl reload NetworkManager",
"cat /etc/resolv.conf",
"NetworkManager --print-config dns=none",
"rm /etc/resolv.conf",
"ln -s /etc/resolv.conf.manually-configured /etc/resolv.conf"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/manually-configuring-the-etc-resolv-conf-file_configuring-and-managing-networking |
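The first procedure above can be condensed into the following sketch, run as root; the file name 90-dns-none.conf comes from the example above, while the nameserver and search values are placeholders to replace with your own:

# Tell NetworkManager to stop managing DNS.
printf '[main]\ndns=none\n' > /etc/NetworkManager/conf.d/90-dns-none.conf
systemctl reload NetworkManager

# Maintain /etc/resolv.conf by hand from now on.
printf 'nameserver 192.0.2.1\nsearch example.com\n' > /etc/resolv.conf
cat /etc/resolv.conf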
Chapter 148. Zip File | Chapter 148. Zip File The Zip File Data Format is a message compression and de-compression format. Messages can be marshalled (compressed) to Zip files containing a single entry, and Zip files containing a single entry can be unmarshalled (decompressed) to the original file contents. This data format supports ZIP64, as long as Java 7 or later is being used. 148.1. Dependencies When using zipfile with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-zipfile-starter</artifactId> </dependency> 148.2. ZipFile Options The Zip File dataformat supports 4 options, which are listed below. Name Default Java Type Description usingIterator Boolean If the zip file has more than one entry, setting this option to true allows you to work with the splitter EIP to split the data using an iterator in streaming mode. allowEmptyDirectory Boolean If the zip file has more than one entry, setting this option to true allows you to get the iterator even if the directory is empty. preservePathElements Boolean If the file name contains path elements, setting this option to true allows the path to be maintained in the zip file. maxDecompressedSize Integer Set the maximum decompressed size of a zip file (in bytes). The default value if not specified corresponds to 1 gigabyte. An IOException will be thrown if the decompressed size exceeds this amount. Set to -1 to disable setting a maximum decompressed size. 148.3. Marshal In this example we marshal a regular text/XML payload to a compressed payload using Zip file compression, and send it to an ActiveMQ queue called MY_QUEUE. from("direct:start") .marshal().zipFile() .to("activemq:queue:MY_QUEUE"); The name of the Zip entry inside the created Zip file is based on the incoming CamelFileName message header, which is the standard message header used by the file component. Additionally, the outgoing CamelFileName message header is automatically set to the value of the incoming CamelFileName message header, with the ".zip" suffix. So for example, if the following route finds a file named "test.txt" in the input directory, the output will be a Zip file named "test.txt.zip" containing a single Zip entry named "test.txt": from("file:input/directory?antInclude=*/.txt") .marshal().zipFile() .to("file:output/directory"); If there is no incoming CamelFileName message header (for example, if the file component is not the consumer), then the message ID is used by default, and since the message ID is normally a unique generated ID, you will end up with filenames like ID-MACHINENAME-2443-1211718892437-1-0.zip . If you want to override this behavior, then you can set the value of the CamelFileName header explicitly in your route: from("direct:start") .setHeader(Exchange.FILE_NAME, constant("report.txt")) .marshal().zipFile() .to("file:output/directory"); This route would result in a Zip file named "report.txt.zip" in the output directory, containing a single Zip entry named "report.txt". 148.4. Unmarshal In this example we unmarshal a Zip file payload from an ActiveMQ queue called MY_QUEUE to its original format, and forward it for processing to the UnZippedMessageProcessor . 
from("activemq:queue:MY_QUEUE") .unmarshal().zipFile() .process(new UnZippedMessageProcessor()); If the zip file has more than one entry, set the usingIterator option of ZipFileDataFormat to true, and you can then use the splitter to do the further work. ZipFileDataFormat zipFile = new ZipFileDataFormat(); zipFile.setUsingIterator(true); from("file:src/test/resources/org/apache/camel/dataformat/zipfile/?delay=1000&noop=true") .unmarshal(zipFile) .split(body(Iterator.class)).streaming() .process(new UnZippedMessageProcessor()) .end(); Or you can use the ZipSplitter as an expression for the splitter directly, like this: from("file:src/test/resources/org/apache/camel/dataformat/zipfile?delay=1000&noop=true") .split(new ZipSplitter()).streaming() .process(new UnZippedMessageProcessor()) .end(); 148.4.1. Aggregate Note This aggregation strategy requires eager completion check to work properly. In this example we aggregate all text files found in the input directory into a single Zip file that is stored in the output directory. from("file:input/directory?antInclude=*/.txt") .aggregate(constant(true), new ZipAggregationStrategy()) .completionFromBatchConsumer().eagerCheckCompletion() .to("file:output/directory"); The outgoing CamelFileName message header is created using java.io.File.createTempFile, with the ".zip" suffix. If you want to override this behavior, then you can set the value of the CamelFileName header explicitly in your route: from("file:input/directory?antInclude=*/.txt") .aggregate(constant(true), new ZipAggregationStrategy()) .completionFromBatchConsumer().eagerCheckCompletion() .setHeader(Exchange.FILE_NAME, constant("reports.zip")) .to("file:output/directory"); 148.5. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.dataformat.zipfile.allow-empty-directory If the zip file has more than one entry, setting this option to true allows you to get the iterator even if the directory is empty. false Boolean camel.dataformat.zipfile.enabled Whether to enable auto configuration of the zipfile data format. This is enabled by default. Boolean camel.dataformat.zipfile.max-decompressed-size Set the maximum decompressed size of a zip file (in bytes). The default value if not specified corresponds to 1 gigabyte. An IOException will be thrown if the decompressed size exceeds this amount. Set to -1 to disable setting a maximum decompressed size. 1073741824 Long camel.dataformat.zipfile.preserve-path-elements If the file name contains path elements, setting this option to true allows the path to be maintained in the zip file. false Boolean camel.dataformat.zipfile.using-iterator If the zip file has more than one entry, setting this option to true allows you to work with the splitter EIP to split the data using an iterator in streaming mode. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-zipfile-starter</artifactId> </dependency>",
"from(\"direct:start\") .marshal().zipFile() .to(\"activemq:queue:MY_QUEUE\");",
"from(\"file:input/directory?antInclude=*/.txt\") .marshal().zipFile() .to(\"file:output/directory\");",
"from(\"direct:start\") .setHeader(Exchange.FILE_NAME, constant(\"report.txt\")) .marshal().zipFile() .to(\"file:output/directory\");",
"from(\"activemq:queue:MY_QUEUE\") .unmarshal().zipFile() .process(new UnZippedMessageProcessor());",
"ZipFileDataFormat zipFile = new ZipFileDataFormat(); zipFile.setUsingIterator(true); from(\"file:src/test/resources/org/apache/camel/dataformat/zipfile/?delay=1000&noop=true\") .unmarshal(zipFile) .split(body(Iterator.class)).streaming() .process(new UnZippedMessageProcessor()) .end();",
"from(\"file:src/test/resources/org/apache/camel/dataformat/zipfile?delay=1000&noop=true\") .split(new ZipSplitter()).streaming() .process(new UnZippedMessageProcessor()) .end();",
"from(\"file:input/directory?antInclude=*/.txt\") .aggregate(constant(true), new ZipAggregationStrategy()) .completionFromBatchConsumer().eagerCheckCompletion() .to(\"file:output/directory\");",
"from(\"file:input/directory?antInclude=*/.txt\") .aggregate(constant(true), new ZipAggregationStrategy()) .completionFromBatchConsumer().eagerCheckCompletion() .setHeader(Exchange.FILE_NAME, constant(\"reports.zip\")) .to(\"file:output/directory\");"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-zipfile-dataformat-starter |
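Since the chapter above lists the Spring Boot auto-configuration keys, a small hedged sketch of setting them is shown here; the property keys are taken verbatim from the table, while the file location and the chosen values (a 256 MiB limit) are assumptions for a typical Spring Boot project:

cat >> src/main/resources/application.properties <<'EOF'
# Cap decompression at 256 MiB instead of the 1 GiB default.
camel.dataformat.zipfile.max-decompressed-size=268435456
# Iterate over multi-entry zip files so they can be split in streaming mode.
camel.dataformat.zipfile.using-iterator=true
EOF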
Chapter 1. Overview | Chapter 1. Overview .NET images are added to OpenShift by importing image stream definitions from s2i-dotnetcore . The image stream definitions include the dotnet image stream, which contains sdk images for different supported versions of .NET. Life Cycle and Support Policies for the .NET Program provides an up-to-date overview of supported versions. Version Tag Alias .NET 8.0 dotnet:8.0-ubi8 dotnet:8.0 .NET 9.0 dotnet:9.0-ubi8 dotnet:9.0 The sdk images have corresponding runtime images, which are defined under the dotnet-runtime image stream. The container images work across different versions of Red Hat Enterprise Linux and OpenShift. The UBI-8-based images (suffix -ubi8) are hosted on registry.access.redhat.com and do not require authentication. | null | https://docs.redhat.com/en/documentation/net/9.0/html/getting_started_with_.net_on_openshift_container_platform/con_overview-of-dotnet-on-openshift_getting-started-with-dotnet-on-openshift |
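A hedged sketch of checking the imported tags and starting a build with the OpenShift CLI; the openshift namespace and the Git URL are assumptions (shared image streams are usually imported there, and the repository shown is a placeholder for your own application):

oc get imagestreamtags -n openshift | grep '^dotnet'   # confirm dotnet:8.0 and dotnet:9.0 are available
oc new-app dotnet:9.0~https://github.com/your-org/your-dotnet-app.git   # source-to-image build from a hypothetical repository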
Preface | Preface As a developer or system administrator, you can integrate Red Hat Decision Manager with other products and components, such as Spring Boot, Red Hat Single Sign-On, and other supported products. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/pr01 |
Chapter 17. Graphical User Interface Tools for Guest Virtual Machine Management | Chapter 17. Graphical User Interface Tools for Guest Virtual Machine Management In addition to virt-manager , Red Hat Enterprise Linux 6 provides the following tools that enable you to access your guest virtual machine's console. 17.1. virt-viewer virt-viewer is a minimalistic command-line utility for displaying the graphical console of a guest virtual machine. The console is accessed using the VNC or SPICE protocol. The guest can be referred to by its name, ID, or UUID. If the guest is not already running, the viewer can be set to wait until it starts before attempting to connect to the console. The viewer can connect to remote hosts to get the console information and then also connect to the remote console using the same network transport. In comparison with virt-manager , virt-viewer offers a smaller set of features, but is less resource-demanding. In addition, unlike virt-manager , virt-viewer in most cases does not require read-write permissions to libvirt. Therefore, it can be used by non-privileged users who should be able to connect to and display guests, but not to configure them. To install the virt-viewer utility, run: Syntax The basic virt-viewer command-line syntax is as follows: Connecting to a guest virtual machine If used without any options, virt-viewer lists guests that it can connect to on the default hypervisor of the local system. To connect to a guest virtual machine that uses the default hypervisor: To connect to a guest virtual machine that uses the KVM-QEMU hypervisor: To connect to a remote console using TLS: To connect to a console on a remote host by using SSH, look up the guest configuration and then make a direct non-tunneled connection to the console: Interface By default, the virt-viewer interface provides only the basic tools for interacting with the guest: Figure 17.1. Sample virt-viewer interface Setting hotkeys To create a customized keyboard shortcut (also referred to as a hotkey) for the virt-viewer session, use the --hotkeys option: The following actions can be assigned to a hotkey: toggle-fullscreen release-cursor smartcard-insert smartcard-remove Key-name combination hotkeys are not case-sensitive. Note that the hotkey setting does not carry over to future virt-viewer sessions. Example 17.1. Setting a virt-viewer hotkey To add a hotkey to change to full screen mode when connecting to a KVM-QEMU guest called testguest: Kiosk mode In kiosk mode, virt-viewer only allows the user to interact with the connected desktop, and does not provide any options to interact with the guest settings or the host system unless the guest is shut down. This can be useful, for example, when an administrator wants to restrict a user's range of actions to a specified guest. To use kiosk mode, connect to a guest with the -k or --kiosk option. Example 17.2. Using virt-viewer in kiosk mode To connect to a KVM-QEMU virtual machine in kiosk mode that terminates after the machine is shut down, use the following command: Note, however, that kiosk mode alone cannot ensure that the user does not interact with the host system or the guest settings after the guest is shut down. This would require further security measures, such as disabling the window manager on the host. | [
"sudo yum install virt-viewer",
"virt-viewer [OPTIONS] {guest-name|id|uuid}",
"virt-viewer guest-name-or-UUID",
"virt-viewer --connect qemu:///system guest-name-or-UUID",
"virt-viewer --connect xen://example.org/ guest-name-or-UUID",
"virt-viewer --direct --connect xen+ssh:// [email protected]/ guest-name-or-UUID",
"virt-viewer --hotkeys= action1 = key-combination1 [, action2 = key-combination2 ] guest-name-or-UUID",
"virt-viewer --hotkeys=toggle-fullscreen=shift+f11 qemu:///system testguest",
"virt-viewer --connect qemu:///system guest-name-or-UUID --kiosk --kiosk-quit on-disconnect"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-virt-tools |
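The commands above can be strung together into one short session sketch (testguest is the example guest name used earlier in this chapter; nothing here goes beyond the options already shown):

sudo yum install virt-viewer
virt-viewer --connect qemu:///system testguest                                      # plain connection
virt-viewer --hotkeys=toggle-fullscreen=shift+f11 qemu:///system testguest          # with a full-screen hotkey
virt-viewer --connect qemu:///system testguest --kiosk --kiosk-quit on-disconnect   # kiosk mode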
Chapter 18. Optimizing virtual machine performance | Chapter 18. Optimizing virtual machine performance Virtual machines (VMs) always experience some degree of performance deterioration in comparison to the host. The following sections explain the reasons for this deterioration and provide instructions on how to minimize the performance impact of virtualization in RHEL 9, so that your hardware infrastructure resources can be used as efficiently as possible. 18.1. What influences virtual machine performance VMs are run as user-space processes on the host. The hypervisor therefore needs to convert the host's system resources so that the VMs can use them. As a consequence, a portion of the resources is consumed by the conversion, and the VM therefore cannot achieve the same performance efficiency as the host. The impact of virtualization on system performance More specific reasons for VM performance loss include: Virtual CPUs (vCPUs) are implemented as threads on the host, handled by the Linux scheduler. VMs do not automatically inherit optimization features, such as NUMA or huge pages, from the host kernel. Disk and network I/O settings of the host might have a significant performance impact on the VM. Network traffic typically travels to a VM through a software-based bridge. Depending on the host devices and their models, there might be significant overhead due to emulation of particular hardware. The severity of the virtualization impact on the VM performance is influenced by a variety of factors, which include: The number of concurrently running VMs. The number of virtual devices used by each VM. The device types used by the VMs. Reducing VM performance loss RHEL 9 provides a number of features you can use to reduce the negative performance effects of virtualization. Notably: The TuneD service can automatically optimize the resource distribution and performance of your VMs. Block I/O tuning can improve the performance of the VM's block devices, such as disks. NUMA tuning can increase vCPU performance. Virtual networking can be optimized in various ways. Important Tuning VM performance can have negative effects on other virtualization functions. For example, it can make migrating the modified VM more difficult. 18.2. Optimizing virtual machine performance by using TuneD The TuneD utility is a tuning profile delivery mechanism that adapts RHEL for certain workload characteristics, such as requirements for CPU-intensive tasks or storage-network throughput responsiveness. It provides a number of tuning profiles that are pre-configured to enhance performance and reduce power consumption in a number of specific use cases. You can edit these profiles or create new profiles to create performance solutions tailored to your environment, including virtualized environments. To optimize RHEL 9 for virtualization, use the following profiles: For RHEL 9 virtual machines, use the virtual-guest profile. It is based on the generally applicable throughput-performance profile, but also decreases the swappiness of virtual memory. For RHEL 9 virtualization hosts, use the virtual-host profile. This enables more aggressive writeback of dirty memory pages, which benefits the host performance. Prerequisites The TuneD service is installed and enabled . Procedure To enable a specific TuneD profile: List the available TuneD profiles. Optional: Create a new TuneD profile or edit an existing TuneD profile. For more information, see Customizing TuneD profiles . Activate a TuneD profile. 
To optimize a virtualization host, use the virtual-host profile. On a RHEL guest operating system, use the virtual-guest profile. Verification Display the active profile for TuneD . Ensure that the TuneD profile settings have been applied on your system. Additional resources Monitoring and managing system status and performance 18.3. Virtual machine performance optimization for specific workloads Virtual machines (VMs) are frequently dedicated to perform a specific workload. You can improve the performance of your VMs by optimizing their configuration for the intended workload. Table 18.1. Recommended VM configurations for specific use cases Use case IOThread vCPU pinning vNUMA pinning huge pages multi-queue Database For database disks Yes * Yes * Yes * Yes, see: multi-queue virtio-blk, virtio-scsi Virtualized Network Function (VNF) No Yes Yes Yes Yes, see: multi-queue virtio-net High Performance Computing (HPC) No Yes Yes Yes No Backup Server For backup disks No No No Yes, see: multi-queue virtio-blk, virtio-scsi VM with many CPUs (Usually more than 32) No Yes * Yes * No No VM with large RAM (Usually more than 128 GB) No No Yes * Yes No * If the VM has enough CPUs and RAM to use more than one NUMA node. Note A VM can fit in more than one category of use cases. In this situation, you should apply all of the recommended configurations. 18.4. Optimizing libvirt daemons The libvirt virtualization suite works as a management layer for the RHEL hypervisor, and your libvirt configuration significantly impacts your virtualization host. Notably, RHEL 9 contains two different types of libvirt daemons, monolithic or modular, and which type of daemons you use affects how granularly you can configure individual virtualization drivers. 18.4.1. Types of libvirt daemons RHEL 9 supports the following libvirt daemon types: Monolithic libvirt The traditional libvirt daemon, libvirtd , controls a wide variety of virtualization drivers, by using a single configuration file - /etc/libvirt/libvirtd.conf . As such, libvirtd allows for centralized hypervisor configuration, but may use system resources inefficiently. Therefore, libvirtd will become unsupported in a future major release of RHEL. However, if you updated to RHEL 9 from RHEL 8, your host still uses libvirtd by default. Modular libvirt Newly introduced in RHEL 9, modular libvirt provides a specific daemon for each virtualization driver. These include the following: virtqemud - A primary daemon for hypervisor management virtinterfaced - A secondary daemon for host NIC management virtnetworkd - A secondary daemon for virtual network management virtnodedevd - A secondary daemon for host physical device management virtnwfilterd - A secondary daemon for host firewall management virtsecretd - A secondary daemon for host secret management virtstoraged - A secondary daemon for storage management Each of the daemons has a separate configuration file - for example /etc/libvirt/virtqemud.conf . As such, modular libvirt daemons provide better options for fine-tuning libvirt resource management. If you performed a fresh install of RHEL 9, modular libvirt is configured by default. steps If your RHEL 9 uses libvirtd , Red Hat recommends switching to modular daemons. For instructions, see Enabling modular libvirt daemons . 18.4.2. Enabling modular libvirt daemons In RHEL 9, the libvirt library uses modular daemons that handle individual virtualization driver sets on your host. For example, the virtqemud daemon handles QEMU drivers. 
If you performed a fresh install of a RHEL 9 host, your hypervisor uses modular libvirt daemons by default. However, if you upgraded your host from RHEL 8 to RHEL 9, your hypervisor uses the monolithic libvirtd daemon, which is the default in RHEL 8. If that is the case, Red Hat recommends enabling the modular libvirt daemons instead, because they provide better options for fine-tuning libvirt resource management. In addition, libvirtd will become unsupported in a future major release of RHEL. Prerequisites Your hypervisor is using the monolithic libvirtd service. If this command displays active , you are using libvirtd . Your virtual machines are shut down. Procedure Stop libvirtd and its sockets. Disable libvirtd to prevent it from starting on boot. Enable the modular libvirt daemons. Start the sockets for the modular daemons. Optional: If you require connecting to your host from remote hosts, enable and start the virtualization proxy daemon. Check whether the libvirtd-tls.socket service is enabled on your system. If libvirtd-tls.socket is not enabled ( listen_tls = 0 ), activate virtproxyd as follows: If libvirtd-tls.socket is enabled ( listen_tls = 1 ), activate virtproxyd as follows: To enable the TLS socket of virtproxyd , your host must have TLS certificates configured to work with libvirt . For more information, see the Upstream libvirt documentation . Verification Activate the enabled virtualization daemons. Verify that your host is using the virtqemud modular daemon. If the status is active , you have successfully enabled modular libvirt daemons. 18.5. Configuring virtual machine memory To improve the performance of a virtual machine (VM), you can assign additional host RAM to the VM. Similarly, you can decrease the amount of memory allocated to a VM so the host memory can be allocated to other VMs or tasks. To perform these actions, you can use the web console or the command line . 18.5.1. Memory overcommitment Virtual machines (VMs) running on a KVM hypervisor do not have dedicated blocks of physical RAM assigned to them. Instead, each VM functions as a Linux process where the host's Linux kernel allocates memory only when requested. In addition, the host's memory manager can move the VM's memory between its own physical memory and swap space. If memory overcommitment is enabled, the kernel can decide to allocate less physical memory than is requested by a VM, because often the requested amount of memory is not fully used by the VM's process. By default, memory overcommitment is enabled in the Linux kernel and the kernel estimates the safe amount of memory overcommitment for VM's requests. However, the system can still become unstable with frequent overcommitment for memory-intensive workloads. Memory overcommitment requires you to allocate sufficient swap space on the host physical machine to accommodate all VMs as well as enough memory for the host physical machine's processes. For instructions on the basic recommended swap space size, see: What is the recommended swap size for Red Hat platforms? Recommended methods to deal with memory shortages on the host: Allocate less memory per VM. Add more physical memory to the host. Use larger swap space. Important A VM will run slower if it is swapped frequently. In addition, overcommitting can cause the system to run out of memory (OOM), which may lead to the Linux kernel shutting down important system processes. Memory overcommit is not supported with device assignment. 
This is because when device assignment is in use, all virtual machine memory must be statically pre-allocated to enable direct memory access (DMA) with the assigned device. Additional resources Virtual memory parameters 18.5.2. Adding and removing virtual machine memory by using the web console To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you can use the web console to adjust the amount of memory allocated to the VM. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The guest OS is running the memory balloon drivers. To verify this is the case: Ensure the VM's configuration includes the memballoon device: If this command displays any output and the model is not set to none , the memballoon device is present. Ensure the balloon drivers are running in the guest OS. In Windows guests, the drivers are installed as a part of the virtio-win driver package. For instructions, see Installing KVM paravirtualized drivers for Windows virtual machines . In Linux guests, the drivers are generally included by default and activate when the memballoon device is present. The web console VM plug-in is installed on your system . Procedure Optional: Obtain the information about the maximum memory and currently used memory for a VM. This will serve as a baseline for your changes, and also for verification. Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the Virtual Machines interface, click the VM whose information you want to see. A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM's graphical interface. Click edit next to the Memory line in the Overview pane. The Memory Adjustment dialog appears. Configure the virtual memory for the selected VM. Maximum allocation - Sets the maximum amount of host memory that the VM can use for its processes. You can specify the maximum memory when creating the VM or increase it later. You can specify memory as multiples of MiB or GiB. Adjusting maximum memory allocation is only possible on a shut-off VM. Current allocation - Sets the actual amount of memory allocated to the VM. This value can be less than the Maximum allocation but cannot exceed it. You can adjust the value to regulate the memory available to the VM for its processes. You can specify memory as multiples of MiB or GiB. If you do not specify this value, the default allocation is the Maximum allocation value. Click Save . The memory allocation of the VM is adjusted. Additional resources Adding and removing virtual machine memory by using the command line Optimizing virtual machine CPU performance 18.5.3. Adding and removing virtual machine memory by using the command line To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you can use the CLI to adjust the amount of memory allocated to the VM. Prerequisites The guest OS is running the memory balloon drivers. To verify this is the case: Ensure the VM's configuration includes the memballoon device: If this command displays any output and the model is not set to none , the memballoon device is present. Ensure the balloon drivers are running in the guest OS. In Windows guests, the drivers are installed as a part of the virtio-win driver package. 
For instructions, see Installing KVM paravirtualized drivers for Windows virtual machines . In Linux guests, the drivers are generally included by default and activate when the memballoon device is present. Procedure Optional: Obtain the information about the maximum memory and currently used memory for a VM. This will serve as a baseline for your changes, and also for verification. Adjust the maximum memory allocated to a VM. Increasing this value improves the performance potential of the VM, and reducing the value lowers the performance footprint the VM has on your host. Note that this change can only be performed on a shut-off VM, so adjusting a running VM requires a reboot to take effect. For example, to change the maximum memory that the testguest VM can use to 4096 MiB: To increase the maximum memory of a running VM, you can attach a memory device to the VM. This is also referred to as memory hot plug . For details, see Attaching memory devices to virtual machines. Warning Removing memory devices from a running VM (also referred as a memory hot unplug) is not supported, and highly discouraged by Red Hat. Optional: You can also adjust the memory currently used by the VM, up to the maximum allocation. This regulates the memory load that the VM has on the host until the reboot, without changing the maximum VM allocation. Verification Confirm that the memory used by the VM has been updated: Optional: If you adjusted the current VM memory, you can obtain the memory balloon statistics of the VM to evaluate how effectively it regulates its memory use. Additional resources Adding and removing virtual machine memory by using the web console Optimizing virtual machine CPU performance 18.5.4. Configuring virtual machines to use huge pages In certain use cases, you can improve memory allocation for your virtual machines (VMs) by using huge pages instead of the default 4 KiB memory pages. For example, huge pages can improve performance for VMs with high memory utilization, such as database servers. Prerequisites The host is configured to use huge pages in memory allocation. For instructions, see: Configuring HugeTLB at boot time Procedure Shut down the selected VM if it is running. To configure a VM to use 1 GiB huge pages, open the XML definition of a VM for editing. For example, to edit a testguest VM, run the following command: Add the following lines to the <memoryBacking> section in the XML definition: <memoryBacking> <hugepages> <page size='1' unit='GiB'/> </hugepages> </memoryBacking> Verification Start the VM. Confirm that the host has successfully allocated huge pages for the running VM. On the host, run the following command: When you add together the number of free and reserved huge pages ( HugePages_Free + HugePages_Rsvd ), the result should be less than the total number of huge pages ( HugePages_Total ). The difference is the number of huge pages that is used by the running VM. Additional resources Configuring huge pages 18.5.5. Adding and removing virtual machine memory by using virtio-mem RHEL 9 provides the virtio-mem paravirtualized memory device. This device makes it possible to dynamically add or remove host memory in virtual machines (VMs). For example, you can use virtio-mem to move memory resources between running VMs or to resize VM memory in cloud setups based on your current requirements. 18.5.5.1. Overview of virtio-mem virtio-mem is a paravirtualized memory device that can be used to dynamically add or remove host memory in virtual machines (VMs). 
For example, you can use this device to move memory resources between running VMs or to resize VM memory in cloud setups based on your current requirements. By using virtio-mem , you can increase the memory of a VM beyond its initial size, and shrink it back to its original size, in units that can have the size of 4 to several hundred mebibytes (MiBs). Note, however, that virtio-mem also relies on a specific guest operating system configuration, especially to reliably unplug memory. virtio-mem feature limitations virtio-mem is currently not compatible with the following features: Using memory locking for real-time applications on the host Using encrypted virtualization on the host Combining virtio-mem with memballoon inflation and deflation on the host Unloading or reloading the virtio_mem driver in a VM Using vhost-user devices, with the exception of virtiofs Additional resources Configuring memory onlining in virtual machines Attaching a virtio-mem device to virtual machines 18.5.5.2. Configuring memory onlining in virtual machines Before using virtio-mem to attach memory to a running virtual machine (also known as memory hot-plugging), you must configure the virtual machine (VM) operating system to automatically set the hot-plugged memory to an online state. Otherwise, the guest operating system is not able to use the additional memory. You can choose from one of the following configurations for memory onlining: online_movable online_kernel auto-movable To learn about differences between these configurations, see: Comparison of memory onlining configurations Memory onlining is configured with udev rules by default in RHEL. However, when using virtio-mem , it is recommended to configure memory onlining directly in the kernel. Prerequisites The host uses the Intel 64, AMD64, or ARM 64 CPU architecture. The host uses RHEL 9.4 or later as the operating system. VMs running on the host use one of the following operating system versions: RHEL 8.10 Important Unplugging memory from a running VM is disabled by default in RHEL 8.10 VMs. RHEL 9 Procedure To set memory onlining to use the online_movable configuration in the VM: Set the memhp_default_state kernel command line parameter to online_movable : Reboot the VM. To set memory onlining to use the online_kernel configuration in the VM: Set the memhp_default_state kernel command line parameter to online_kernel : Reboot the VM. To use the auto-movable memory onlining policy in the VM: Set the memhp_default_state kernel command line parameter to online : Set the memory_hotplug.online_policy kernel command line parameter to auto-movable : Optional: To further tune the auto-movable onlining policy, change the memory_hotplug.auto_movable_ratio and memory_hotplug.auto_movable_numa_aware parameters: The memory_hotplug.auto_movable_ratio parameter sets the maximum ratio of memory only available for movable allocations compared to memory available for any allocations. The ratio is expressed in percents and the default value is: 301 (%), which is a 3:1 ratio. The memory_hotplug.auto_movable_numa_aware parameter controls whether the memory_hotplug.auto_movable_ratio parameter applies to memory across all available NUMA nodes or only for memory within a single NUMA node. The default value is: y (yes) For example, if the maximum ratio is set to 301% and the memory_hotplug.auto_movable_numa_aware is set to y (yes), than the 3:1 ratio is applied even within the NUMA node with the attached virtio-mem device. 
If the parameter is set to n (no), the maximum 3:1 ratio is applied only for all the NUMA nodes as a whole. Additionally, if the ratio is not exceeded, the newly hot-plugged memory will be available only for movable allocations. Otherwise, the newly hot-plugged memory will be available for both movable and unmovable allocations. Reboot the VM. Verification To see if the online_movable configuration has been set correctly, check the current value of the memhp_default_state kernel parameter: To see if the online_kernel configuration has been set correctly, check the current value of the memhp_default_state kernel parameter: To see if the auto-movable configuration has been set correctly, check the following kernel parameters: memhp_default_state : memory_hotplug.online_policy : memory_hotplug.auto_movable_ratio : memory_hotplug.auto_movable_numa_aware : Additional resources Overview of virtio-mem Attaching a virtio-mem device to virtual machines Configuring Memory Hot(Un)Plug 18.5.5.3. Attaching a virtio-mem device to virtual machines To attach additional memory to a running virtual machine (also known as memory hot-plugging) and afterwards be able to resize the hot-plugged memory, you can use a virtio-mem device. Specifically, you can use libvirt XML configuration files and virsh commands to define and attach virtio-mem devices to virtual machines (VMs). Prerequisites The host uses the Intel 64, AMD64, or ARM 64 CPU architecture. The host uses RHEL 9.4 or later as the operating system. VMs running on the host use one of the following operating system versions: RHEL 8.10 Important Unplugging memory from a running VM is disabled by default in RHEL 8.10 VMs. RHEL 9 The VM has memory onlining configured. For instructions, see: Configuring memory onlining in virtual machines Procedure Ensure the XML configuration of the target VM includes the maxMemory parameter: In this example, the XML configuration of the testguest1 VM defines a maxMemory parameter with a 128 gibibyte (GiB) size. The maxMemory size specifies the maximum memory the VM can use, which includes both initial and hot-plugged memory. Create and open an XML file to define virtio-mem devices on the host, for example: Add XML definitions of virtio-mem devices to the file and save it: <memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>0</node> <block unit='MiB'>2</block> <requested unit='GiB'>16</requested> <current unit='GiB'>16</current> </target> <alias name='ua-virtiomem0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </memory> <memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>1</node> <block unit='MiB'>2</block> <requested unit='GiB'>0</requested> <current unit='GiB'>0</current> </target> <alias name='ua-virtiomem1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </memory> In this example, two virtio-mem devices are defined with the following parameters: size : This is the maximum size of the device. In the example, it is 48 GiB. The size must be a multiple of the block size. node : This is the assigned vNUMA node for the virtio-mem device. block : This is the block size of the device. It must be at least the size of the Transparent Huge Page (THP), which is 2 MiB on Intel 64 and AMD64 CPU architecture. On ARM64 architecture, the size of THP can be 2 MiB or 512 MiB depending on the base page size. The 2 MiB block size on Intel 64 or AMD64 architecture is usually a good default choice. 
When using virtio-mem with Virtual Function I/O (VFIO) or mediated devices (mdev) , the total number of blocks across all virtio-mem devices must not be larger than 32768, otherwise the plugging of RAM might fail. requested : This is the amount of memory you attach to the VM with the virtio-mem device. However, it is just a request towards the VM and it might not be resolved successfully, for example if the VM is not properly configured. The requested size must be a multiple of the block size and cannot exceed the maximum defined size . current : This represents the current size the virtio-mem device provides to the VM. The current size can differ from the requested size, for example when requests cannot be completed or when rebooting the VM. alias : This is an optional user-defined alias that you can use to specify the intended virtio-mem device, for example when editing the device with libvirt commands. All user-defined aliases in libvirt must start with the "ua-" prefix. Apart from these specific parameters, libvirt handles the virtio-mem device like any other PCI device. For more information on managing PCI devices attached to VMs, see: Managing virtual devices Use the XML file to attach the defined virtio-mem devices to a VM. For example, to permanently attach the two devices defined in the virtio-mem-device.xml to the running testguest1 VM: The --live option attaches the device to a running VM only, without persistence between boots. The --config option makes the configuration changes persistent. You can also attach the device to a shutdown VM without the --live option. Optional: To dynamically change the requested size of a virtio-mem device attached to a running VM, use the virsh update-memory-device command: In this example: testguest1 is the VM you want to update. --alias ua-virtiomem0 is the virtio-mem device specified by a previously defined alias. --requested-size 4GiB changes the requested size of the virtio-mem device to 4 GiB. Warning Unplugging memory from a running VM by reducing the requested size might be unreliable. Whether this process succeeds depends on various factors, such as the memory onlining policy that is used. In some cases, the guest operating system cannot complete the request successfully, because changing the amount of hot-plugged memory is not possible at that time. Additionally, unplugging memory from a running VM is disabled by default in RHEL 8.10 VMs. Optional: To unplug a virtio-mem device from a shut-down VM, use the virsh detach-device command: Optional: To unplug a virtio-mem device from a running VM: Change the requested size of the virtio-mem device to 0, otherwise the attempt to unplug a virtio-mem device from a running VM will fail. Unplug a virtio-mem device from the running VM: Verification In the VM, check the available RAM and see if the total amount now includes the hot-plugged memory: The current amount of plugged-in RAM can be also viewed on the host by displaying the XML configuration of the running VM: In this example: <currentMemory unit='GiB'>31</currentMemory> represents the total RAM available in the VM from all sources. <current unit='GiB'>16</current> represents the current size of the plugged-in RAM provided by the virtio-mem device. Additional resources Overview of virtio-mem Configuring memory onlining in virtual machines 18.5.5.4. 
Comparison of memory onlining configurations When attaching memory to a running RHEL virtual machine (also known as memory hot-plugging), you must set the hot-plugged memory to an online state in the virtual machine (VM) operating system. Otherwise, the system will not be able to use the memory. The following table summarizes the main considerations when choosing between the available memory onlining configurations. Table 18.2. Comparison of memory onlining configurations Configuration name Unplugging memory from a VM A risk of creating a memory zone imbalance A potential use case Memory requirements of the intended workload online_movable Hot-plugged memory can be reliably unplugged. Yes Hot-plugging a comparatively small amount of memory Mostly user-space memory auto-movable Movable portions of hot-plugged memory can be reliably unplugged. Minimal Hot-plugging a large amount of memory Mostly user-space memory online_kernel Hot-plugged memory cannot be reliably unplugged. No Unreliable memory unplugging is acceptable. User-space or kernel-space memory A zone imbalance is a lack of available memory pages in one of the Linux memory zones. A zone imbalance can negatively impact the system performance. For example, the kernel might crash if it runs out of free memory for unmovable allocations. Usually, movable allocations contain mostly user-space memory pages and unmovable allocations contain mostly kernel-space memory pages. Additional resources Onlining and Offlining Memory Blocks Zone Imbalances Configuring memory onlining in virtual machines 18.5.6. Additional resources Attaching devices to virtual machines . 18.6. Optimizing virtual machine I/O performance The input and output (I/O) capabilities of a virtual machine (VM) can significantly limit the VM's overall efficiency. To address this, you can optimize a VM's I/O by configuring block I/O parameters. 18.6.1. Tuning block I/O in virtual machines When multiple block devices are being used by one or more VMs, it might be important to adjust the I/O priority of specific virtual devices by modifying their I/O weights . Increasing the I/O weight of a device increases its priority for I/O bandwidth, and therefore provides it with more host resources. Similarly, reducing a device's weight makes it consume less host resources. Note Each device's weight value must be within the 100 to 1000 range. Alternatively, the value can be 0 , which removes that device from per-device listings. Procedure To display and set a VM's block I/O parameters: Display the current <blkio> parameters for a VM: # virsh dumpxml VM-name <domain> [...] <blkiotune> <weight>800</weight> <device> <path>/dev/sda</path> <weight>1000</weight> </device> <device> <path>/dev/sdb</path> <weight>500</weight> </device> </blkiotune> [...] </domain> Edit the I/O weight of a specified device: For example, the following changes the weight of the /dev/sda device in the testguest1 VM to 500. Verification Check that the VM's block I/O parameters have been configured correctly. Important Certain kernels do not support setting I/O weights for specific devices. If the step does not display the weights as expected, it is likely that this feature is not compatible with your host kernel. 18.6.2. Disk I/O throttling in virtual machines When several VMs are running simultaneously, they can interfere with system performance by using excessive disk I/O. Disk I/O throttling in KVM virtualization provides the ability to set a limit on disk I/O requests sent from the VMs to the host machine. 
This can prevent a VM from over-utilizing shared resources and impacting the performance of other VMs. To enable disk I/O throttling, set a limit on disk I/O requests sent from each block device attached to VMs to the host machine. Procedure Use the virsh domblklist command to list the names of all the disk devices on a specified VM. Find the host block device where the virtual disk that you want to throttle is mounted. For example, if you want to throttle the sdb virtual disk from the step, the following output shows that the disk is mounted on the /dev/nvme0n1p3 partition. Set I/O limits for the block device by using the virsh blkiotune command. The following example throttles the sdb disk on the rollin-coal VM to 1000 read and write I/O operations per second and to 50 MB per second read and write throughput. Additional information Disk I/O throttling can be useful in various situations, for example when VMs belonging to different customers are running on the same host, or when quality of service guarantees are given for different VMs. Disk I/O throttling can also be used to simulate slower disks. I/O throttling can be applied independently to each block device attached to a VM and supports limits on throughput and I/O operations. Red Hat does not support using the virsh blkdeviotune command to configure I/O throttling in VMs. For more information about unsupported features when using RHEL 9 as a VM host, see Unsupported features in RHEL 9 virtualization . 18.6.3. Enabling multi-queue on storage devices When using virtio-blk or virtio-scsi storage devices in your virtual machines (VMs), the multi-queue feature provides improved storage performance and scalability. It enables each virtual CPU (vCPU) to have a separate queue and interrupt to use without affecting other vCPUs. The multi-queue feature is enabled by default for the Q35 machine type, however you must enable it manually on the i440FX machine type. You can tune the number of queues to be optimal for your workload, however the optimal number differs for each type of workload and you must test which number of queues works best in your case. Procedure To enable multi-queue on a storage device, edit the XML configuration of the VM. In the XML configuration, find the intended storage device and change the queues parameter to use multiple I/O queues. Replace N with the number of vCPUs in the VM, up to 16. A virtio-blk example: <disk type='block' device='disk'> <driver name='qemu' type='raw' queues='N' /> <source dev='/dev/sda'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> A virtio-scsi example: <controller type='scsi' index='0' model='virtio-scsi'> <driver queues='N' /> </controller> Restart the VM for the changes to take effect. 18.6.4. Configuring dedicated IOThreads To improve the Input/Output (IO) performance of a disk on your virtual machine (VM), you can configure a dedicated IOThread that is used to manage the IO operations of the VM's disk. Normally, the IO operations of a disk are a part of the main QEMU thread, which can decrease the responsiveness of the VM as a whole during intensive IO workloads. By separating the IO operations to a dedicated IOThread , you can significantly increase the responsiveness and performance of your VM. Procedure Shut down the selected VM if it is running. On the host, add or edit the <iothreads> tag in the XML configuration of the VM. 
For example, to create a single IOThread for a testguest1 VM: Note For optimal results, use only 1-2 IOThreads per CPU on the host. Assign a dedicated IOThread to a VM disk. For example, to assign an IOThread with ID of 1 to a disk on the testguest1 VM: Note IOThread IDs start from 1 and you must dedicate only a single IOThread to a disk. Usually, one dedicated IOThread per VM is sufficient for optimal performance. When using virtio-scsi storage devices, assign a dedicated IOThread to the virtio-scsi controller. For example, to assign an IOThread with ID of 1 to a controller on the testguest1 VM: Verification Evaluate the impact of your changes on your VM performance. For details, see: Virtual machine performance monitoring tools 18.6.5. Configuring virtual disk caching KVM provides several virtual disk caching modes. For intensive Input/Output (IO) workloads, selecting the optimal caching mode can significantly increase the virtual machine (VM) performance. Virtual disk cache modes overview writethrough Host page cache is used for reading only. Writes are reported as completed only when the data has been committed to the storage device. The sustained IO performance is decreased but this mode has good write guarantees. writeback Host page cache is used for both reading and writing. Writes are reported as complete when data reaches the host's memory cache, not physical storage. This mode has faster IO performance than writethrough but it is possible to lose data on host failure. none Host page cache is bypassed entirely. This mode relies directly on the write queue of the physical disk, so it has a predictable sustained IO performance and offers good write guarantees on a stable guest. It is also a safe cache mode for VM live migration. Procedure Shut down the selected VM if it is running. Edit the XML configuration of the selected VM. Find the disk device and edit the cache option in the driver tag. <domain type='kvm'> <name>testguest1</name> ... <devices> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/> <source file='/var/lib/libvirt/images/test-disk.raw'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> ... </devices> ... </domain> 18.7. Optimizing virtual machine CPU performance Much like physical CPUs in host machines, vCPUs are critical to virtual machine (VM) performance. As a result, optimizing vCPUs can have a significant impact on the resource efficiency of your VMs. To optimize your vCPU: Adjust how many host CPUs are assigned to the VM. You can do this using the CLI or the web console . Ensure that the vCPU model is aligned with the CPU model of the host. For example, to set the testguest1 VM to use the CPU model of the host: On an ARM 64 system, use --cpu host-passthrough . Manage kernel same-page merging (KSM) . If your host machine uses Non-Uniform Memory Access (NUMA), you can also configure NUMA for its VMs. This maps the host's CPU and memory processes onto the CPU and memory processes of the VM as closely as possible. In effect, NUMA tuning provides the vCPU with a more streamlined access to the system memory allocated to the VM, which can improve the vCPU processing effectiveness. For details, see Configuring NUMA in a virtual machine and Virtual machine performance optimization for specific workloads . 18.7.1.
vCPU overcommitment vCPU overcommitment allows you to have a setup where the sum of all vCPUs in virtual machines (VMs) running on a host exceeds the number of physical CPUs on the host. However, you might experience performance deterioration when simultaneously running more cores in your VMs than are physically available on the host. For best performance, assign VMs with only as many vCPUs as are required to run the intended workloads in each VM. vCPU overcommitment recommendations: Assign the minimum number of vCPUs required by the VM's workloads for best performance. Avoid overcommitting vCPUs in production without extensive testing. If overcommitting vCPUs, the safe ratio is typically 5 vCPUs to 1 physical CPU for loads under 100%. It is not recommended to have more than 10 total allocated vCPUs per physical processor core. Monitor CPU usage to prevent performance degradation under heavy loads. Important Applications that use 100% of memory or processing resources may become unstable in overcommitted environments. Do not overcommit memory or CPUs in a production environment without extensive testing, as the CPU overcommit ratio is workload-dependent. 18.7.2. Adding and removing virtual CPUs by using the command line To increase or optimize the CPU performance of a virtual machine (VM), you can add or remove virtual CPUs (vCPUs) assigned to the VM. When performed on a running VM, this is also referred to as vCPU hot plugging and hot unplugging. However, note that vCPU hot unplug is not supported in RHEL 9, and Red Hat highly discourages its use. Prerequisites Optional: View the current state of the vCPUs in the targeted VM. For example, to display the number of vCPUs on the testguest VM: This output indicates that testguest is currently using 1 vCPU, and 1 more vCPU can be hot plugged to it to increase the VM's performance. However, after reboot, the number of vCPUs testguest uses will change to 2, and it will be possible to hot plug 2 more vCPUs. Procedure Adjust the maximum number of vCPUs that can be attached to a VM, which takes effect on the VM's next boot. For example, to increase the maximum vCPU count for the testguest VM to 8: Note that the maximum may be limited by the CPU topology, host hardware, the hypervisor, and other factors. Adjust the current number of vCPUs attached to a VM, up to the maximum configured in the previous step. For example: To increase the number of vCPUs attached to the running testguest VM to 4: This increases the VM's performance and host load footprint of testguest until the VM's next boot. To permanently decrease the number of vCPUs attached to the testguest VM to 1: This decreases the VM's performance and host load footprint of testguest after the VM's next boot. However, if needed, additional vCPUs can be hot plugged to the VM to temporarily increase its performance. Verification Confirm that the current state of vCPU for the VM reflects your changes. Additional resources Managing virtual CPUs by using the web console 18.7.3. Managing virtual CPUs by using the web console By using the RHEL 9 web console, you can review and configure virtual CPUs used by virtual machines (VMs) to which the web console is connected. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . Procedure Log in to the RHEL 9 web console.
For details, see Logging in to the web console . In the Virtual Machines interface, click the VM whose information you want to see. A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM's graphical interface. Click edit next to the number of vCPUs in the Overview pane. The vCPU details dialog appears. Configure the virtual CPUs for the selected VM. vCPU Count - The number of vCPUs currently in use. Note The vCPU count cannot be greater than the vCPU Maximum. vCPU Maximum - The maximum number of virtual CPUs that can be configured for the VM. If this value is higher than the vCPU Count , additional vCPUs can be attached to the VM. Sockets - The number of sockets to expose to the VM. Cores per socket - The number of cores for each socket to expose to the VM. Threads per core - The number of threads for each core to expose to the VM. Note that the Sockets , Cores per socket , and Threads per core options adjust the CPU topology of the VM. This may be beneficial for vCPU performance and may impact the functionality of certain software in the guest OS. If a different setting is not required by your deployment, keep the default values. Click Apply . The virtual CPUs for the VM are configured. Note Changes to virtual CPU settings only take effect after the VM is restarted. Additional resources Adding and removing virtual CPUs by using the command line 18.7.4. Configuring NUMA in a virtual machine The following methods can be used to configure Non-Uniform Memory Access (NUMA) settings of a virtual machine (VM) on a RHEL 9 host. For ease of use, you can set up a VM's NUMA configuration by using automated utilities and services. However, manual NUMA setup is more likely to yield a significant performance improvement. Prerequisites The host is a NUMA-compatible machine. To detect whether this is the case, use the virsh nodeinfo command and see the NUMA cell(s) line: If the value of the line is 2 or greater, the host is NUMA-compatible. Optional: You have the numactl package installed on the host. Procedure Automatic methods Set the VM's NUMA policy to Preferred . For example, to configure the testguest5 VM: Enable automatic NUMA balancing on the host: Start the numad service to automatically align the VM CPU with memory resources. Manual methods To manually tune NUMA settings, you can specify which host NUMA nodes will be assigned specifically to a certain VM. This can improve the host memory usage by the VM's vCPU. Optional: Use the numactl command to view the NUMA topology on the host: Edit the XML configuration of a VM to assign CPU and memory resources to specific NUMA nodes. For example, the following configuration sets testguest6 to use vCPUs 0-7 on NUMA node 0 and vCPUs 8-15 on NUMA node 1 . Both nodes are also assigned 16 GiB of the VM's memory. If the VM is running, restart it to apply the configuration. Note For best performance results, it is recommended to respect the maximum memory size for each NUMA node on the host. Known issues NUMA tuning currently cannot be performed on IBM Z hosts . Additional resources Virtual machine performance optimization for specific workloads Virtual machine performance optimization for specific workloads using the numastat utility 18.7.5. Configuring virtual CPU pinning To improve the CPU performance of a virtual machine (VM), you can pin a virtual CPU (vCPU) to a specific physical CPU thread on the host.
This ensures that the vCPU will have its own dedicated physical CPU thread, which can significantly improve the vCPU performance. To further optimize the CPU performance, you can also pin QEMU process threads associated with a specified VM to a specific host CPU. Procedure Check the CPU topology on the host: In this example, the output contains NUMA nodes and the available physical CPU threads on the host. Check the number of vCPU threads inside the VM: In this example, the output contains NUMA nodes and the available vCPU threads inside the VM. Pin specific vCPU threads from a VM to a specific host CPU or range of CPUs. This is recommended as a safe method of vCPU performance improvement. For example, the following commands pin vCPU threads 0 to 3 of the testguest6 VM to host CPUs 1, 3, 5, 7, respectively: Optional: Verify whether the vCPU threads are successfully pinned to CPUs. After pinning vCPU threads, you can also pin QEMU process threads associated with a specified VM to a specific host CPU or range of CPUs. This can further help the QEMU process to run more efficiently on the physical CPU. For example, the following commands pin the QEMU process thread of testguest6 to CPUs 2 and 4, and verify this was successful: 18.7.6. Configuring virtual CPU capping You can use virtual CPU (vCPU) capping to limit the amount of CPU resources a virtual machine (VM) can use. vCPU capping can improve the overall performance by preventing excessive use of host's CPU resources by a single VM and by making it easier for the hypervisor to manage CPU scheduling. Procedure View the current vCPU scheduling configuration on the host. To configure an absolute vCPU cap for a VM, set the vcpu_period and vcpu_quota parameters. Both parameters use a numerical value that represents a time duration in microseconds. Set the vcpu_period parameter by using the virsh schedinfo command. For example: In this example, the vcpu_period is set to 100,000 microseconds, which means the scheduler enforces vCPU capping during this time interval. You can also use the --live --config options to configure a running VM without restarting it. Set the vcpu_quota parameter by using the virsh schedinfo command. For example: In this example, the vcpu_quota is set to 50,000 microseconds, which specifies the maximum amount of CPU time that the VM can use during the vcpu_period time interval. In this case, vcpu_quota is set as the half of vcpu_period , so the VM can use up to 50% of the CPU time during that interval. You can also use the --live --config options to configure a running VM without restarting it. Verification Check that the vCPU scheduling parameters have the correct values. 18.7.7. Tuning CPU weights The CPU weight (or CPU shares ) setting controls how much CPU time a virtual machine (VM) receives compared to other running VMs. By increasing the CPU weight of a specific VM, you can ensure that this VM gets more CPU time relative to other VMs. To prioritize CPU time allocation between multiple VMs, set the cpu_shares parameter The possible CPU weight values range from 0 to 262144 and the default value for a new KVM VM is 1024 . Procedure Check the current CPU weight of a VM. Adjust the CPU weight to a preferred value. In this example, cpu_shares is set to 2048. This means that if all other VMs have the value set to 1024, this VM gets approximately twice the amount of CPU time. You can also use the --live --config options to configure a running VM without restarting it. 18.7.8. 
Enabling and disabling kernel same-page merging Kernel Same-Page Merging (KSM) improves memory density by sharing identical memory pages between virtual machines (VMs). Therefore, enabling KSM might improve memory efficiency of your VM deployment. However, enabling KSM also increases CPU utilization, and might negatively affect overall performance depending on the workload. In RHEL 9 and later, KSM is disabled by default. To enable KSM and test its impact on your VM performance, see the following instructions. Prerequisites Root access to your host system. Procedure Enable KSM: Warning Enabling KSM increases CPU utilization and affects overall CPU performance. Install the ksmtuned service: Start the service: To enable KSM for a single session, use the systemctl utility to start the ksm and ksmtuned services. To enable KSM persistently, use the systemctl utility to enable the ksm and ksmtuned services. Monitor the performance and resource consumption of VMs on your host to evaluate the benefits of activating KSM. Specifically, ensure that the additional CPU usage by KSM does not offset the memory improvements and does not cause additional performance issues. In latency-sensitive workloads, also pay attention to cross-NUMA page merges. Optional: If KSM has not improved your VM performance, disable it: To disable KSM for a single session, use the systemctl utility to stop ksm and ksmtuned services. To disable KSM persistently, use the systemctl utility to disable ksm and ksmtuned services. Note Memory pages shared between VMs before deactivating KSM will remain shared. To stop sharing, delete all the PageKSM pages in the system by using the following command: However, this command increases memory usage, and might cause performance problems on your host or your VMs. 18.8. Optimizing virtual machine network performance Due to the virtual nature of a VM's network interface controller (NIC), the VM loses a portion of its allocated host network bandwidth, which can reduce the overall workload efficiency of the VM. The following tips can minimize the negative impact of virtualization on the virtual NIC (vNIC) throughput. Procedure Use any of the following methods and observe if it has a beneficial effect on your VM network performance: Enable the vhost_net module On the host, ensure the vhost_net kernel feature is enabled: If the output of this command is blank, enable the vhost_net kernel module: Set up multi-queue virtio-net To set up the multi-queue virtio-net feature for a VM, use the virsh edit command to edit to the XML configuration of the VM. In the XML, add the following to the <devices> section, and replace N with the number of vCPUs in the VM, up to 16: If the VM is running, restart it for the changes to take effect. Batching network packets In Linux VM configurations with a long transmission path, batching packets before submitting them to the kernel may improve cache utilization. To set up packet batching, use the following command on the host, and replace tap0 with the name of the network interface that the VMs use: SR-IOV If your host NIC supports SR-IOV, use SR-IOV device assignment for your vNICs. For more information, see Managing SR-IOV devices . Additional resources Understanding virtual networking 18.9. Virtual machine performance monitoring tools To identify what consumes the most VM resources and which aspect of VM performance needs optimization, performance diagnostic tools, both general and VM-specific, can be used. 
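One VM-specific check that complements the tools described below concerns KSM: if you enabled kernel same-page merging as described earlier, the kernel's KSM counters show whether the merging is actually saving memory. The following shell sketch is not part of the original procedure; it assumes the standard /sys/kernel/mm/ksm interface and a 4 KiB base page size, so treat the computed figure as a rough estimate only.

```
# Display the raw KSM counters. pages_shared is the number of deduplicated
# pages kept in memory, pages_sharing is the number of guest pages that map
# to them, and pages_unshared is the number of candidate pages with no match.
grep -H . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing /sys/kernel/mm/ksm/pages_unshared /sys/kernel/mm/ksm/full_scans

# Rough estimate of the memory saved by merging, assuming 4 KiB pages:
# saved pages = pages_sharing - pages_shared
echo "KSM currently saves approximately $(( ( $(cat /sys/kernel/mm/ksm/pages_sharing) - $(cat /sys/kernel/mm/ksm/pages_shared) ) * 4 / 1024 )) MiB"
```

If the estimate stays near zero while the ksmd kernel thread shows up prominently in top, disabling KSM again is usually the better trade-off.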
Default OS performance monitoring tools For standard performance evaluation, you can use the utilities provided by default by your host and guest operating systems: On your RHEL 9 host, as root, use the top utility or the system monitor application, and look for qemu and virt in the output. This shows how much host system resources your VMs are consuming. If the monitoring tool displays that any of the qemu or virt processes consume a large portion of the host CPU or memory capacity, use the perf utility to investigate. For details, see below. In addition, if a vhost_net thread process, named for example vhost_net-1234 , is displayed as consuming an excessive amount of host CPU capacity, consider using virtual network optimization features , such as multi-queue virtio-net . On the guest operating system, use performance utilities and applications available on the system to evaluate which processes consume the most system resources. On Linux systems, you can use the top utility. On Windows systems, you can use the Task Manager application. perf kvm You can use the perf utility to collect and analyze virtualization-specific statistics about the performance of your RHEL 9 host. To do so: On the host, install the perf package: Use one of the perf kvm stat commands to display perf statistics for your virtualization host: For real-time monitoring of your hypervisor, use the perf kvm stat live command. To log the perf data of your hypervisor over a period of time, activate the logging by using the perf kvm stat record command. After the command is canceled or interrupted, the data is saved in the perf.data.guest file, which can be analyzed by using the perf kvm stat report command. Analyze the perf output for types of VM-EXIT events and their distribution. For example, the PAUSE_INSTRUCTION events should be infrequent, but in the following output, the high occurrence of this event suggests that the host CPUs are not handling the running vCPUs well. In such a scenario, consider shutting down some of your active VMs, removing vCPUs from these VMs, or tuning the performance of the vCPUs . Other event types that can signal problems in the output of perf kvm stat include: INSN_EMULATION - suggests suboptimal VM I/O configuration . For more information about using perf to monitor virtualization performance, see the perf-kvm man page on your system. numastat To see the current NUMA configuration of your system, you can use the numastat utility, which is provided by installing the numactl package. The following shows a host with 4 running VMs, each obtaining memory from multiple NUMA nodes. This is not optimal for vCPU performance, and warrants adjusting : In contrast, the following shows memory being provided to each VM by a single node, which is significantly more efficient. 18.10. Additional resources Optimizing Windows virtual machines | [
"tuned-adm list Available profiles: - balanced - General non-specialized TuneD profile - desktop - Optimize for the desktop use-case [...] - virtual-guest - Optimize for running inside a virtual guest - virtual-host - Optimize for running KVM guests Current active profile: balanced",
"tuned-adm profile selected-profile",
"tuned-adm profile virtual-host",
"tuned-adm profile virtual-guest",
"tuned-adm active Current active profile: virtual-host",
"tuned-adm verify Verification succeeded, current system settings match the preset profile. See tuned log file ('/var/log/tuned/tuned.log') for details.",
"systemctl is-active libvirtd.service active",
"systemctl stop libvirtd.service systemctl stop libvirtd{,-ro,-admin,-tcp,-tls}.socket",
"systemctl disable libvirtd.service systemctl disable libvirtd{,-ro,-admin,-tcp,-tls}.socket",
"for drv in qemu interface network nodedev nwfilter secret storage; do systemctl unmask virtUSD{drv}d.service; systemctl unmask virtUSD{drv}d{,-ro,-admin}.socket; systemctl enable virtUSD{drv}d.service; systemctl enable virtUSD{drv}d{,-ro,-admin}.socket; done",
"for drv in qemu network nodedev nwfilter secret storage; do systemctl start virtUSD{drv}d{,-ro,-admin}.socket; done",
"grep listen_tls /etc/libvirt/libvirtd.conf listen_tls = 0",
"systemctl unmask virtproxyd.service systemctl unmask virtproxyd{,-ro,-admin}.socket systemctl enable virtproxyd.service systemctl enable virtproxyd{,-ro,-admin}.socket systemctl start virtproxyd{,-ro,-admin}.socket",
"systemctl unmask virtproxyd.service systemctl unmask virtproxyd{,-ro,-admin,-tls}.socket systemctl enable virtproxyd.service systemctl enable virtproxyd{,-ro,-admin,-tls}.socket systemctl start virtproxyd{,-ro,-admin,-tls}.socket",
"virsh uri qemu:///system",
"systemctl is-active virtqemud.service active",
"virsh dumpxml testguest | grep memballoon <memballoon model='virtio'> </memballoon>",
"virsh dominfo testguest Max memory: 2097152 KiB Used memory: 2097152 KiB",
"virsh dumpxml testguest | grep memballoon <memballoon model='virtio'> </memballoon>",
"virsh dominfo testguest Max memory: 2097152 KiB Used memory: 2097152 KiB",
"virt-xml testguest --edit --memory memory=4096,currentMemory=4096 Domain 'testguest' defined successfully. Changes will take effect after the domain is fully powered off.",
"virsh setmem testguest --current 2048",
"virsh dominfo testguest Max memory: 4194304 KiB Used memory: 2097152 KiB",
"virsh domstats --balloon testguest Domain: 'testguest' balloon.current=365624 balloon.maximum=4194304 balloon.swap_in=0 balloon.swap_out=0 balloon.major_fault=306 balloon.minor_fault=156117 balloon.unused=3834448 balloon.available=4035008 balloon.usable=3746340 balloon.last-update=1587971682 balloon.disk_caches=75444 balloon.hugetlb_pgalloc=0 balloon.hugetlb_pgfail=0 balloon.rss=1005456",
"virsh edit testguest",
"<memoryBacking> <hugepages> <page size='1' unit='GiB'/> </hugepages> </memoryBacking>",
"cat /proc/meminfo | grep Huge HugePages_Total: 4 HugePages_Free: 2 HugePages_Rsvd: 1 Hugepagesize: 1024000 kB",
"grubby --update-kernel=ALL --remove-args=memhp_default_state --args=memhp_default_state=online_movable",
"grubby --update-kernel=ALL --remove-args=memhp_default_state --args=memhp_default_state=online_kernel",
"grubby --update-kernel=ALL --remove-args=memhp_default_state --args=memhp_default_state=online",
"grubby --update-kernel=ALL --remove-args=\"memory_hotplug.online_policy\" --args=memory_hotplug.online_policy=auto-movable",
"grubby --update-kernel=ALL --remove-args=\"memory_hotplug.auto_movable_ratio\" --args=memory_hotplug.auto_movable_ratio= <percentage> grubby --update-kernel=ALL --remove-args=\"memory_hotplug.memory_auto_movable_numa_aware\" --args=memory_hotplug.auto_movable_numa_aware= <y/n>",
"cat /sys/devices/system/memory/auto_online_blocks online_movable",
"cat /sys/devices/system/memory/auto_online_blocks online_kernel",
"cat /sys/devices/system/memory/auto_online_blocks online",
"cat /sys/module/memory_hotplug/parameters/online_policy auto-movable",
"cat /sys/module/memory_hotplug/parameters/auto_movable_ratio 301",
"cat /sys/module/memory_hotplug/parameters/auto_movable_numa_aware y",
"virsh edit testguest1 <domain type='kvm'> <name>testguest1</name> <maxMemory unit='GiB'>128</maxMemory> </domain>",
"vim virtio-mem-device.xml",
"<memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>0</node> <block unit='MiB'>2</block> <requested unit='GiB'>16</requested> <current unit='GiB'>16</current> </target> <alias name='ua-virtiomem0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </memory> <memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>1</node> <block unit='MiB'>2</block> <requested unit='GiB'>0</requested> <current unit='GiB'>0</current> </target> <alias name='ua-virtiomem1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </memory>",
"virsh attach-device testguest1 virtio-mem-device.xml --live --config",
"virsh update-memory-device testguest1 --alias ua-virtiomem0 --requested-size 4GiB",
"virsh detach-device testguest1 virtio-mem-device.xml",
"virsh update-memory-device testguest1 --alias ua-virtiomem0 --requested-size 0",
"virsh detach-device testguest1 virtio-mem-device.xml --config",
"free -h total used free shared buff/cache available Mem: 31Gi 5.5Gi 14Gi 1.3Gi 11Gi 23Gi Swap: 8.0Gi 0B 8.0Gi",
"numactl -H available: 1 nodes (0) node 0 cpus: 0 1 2 3 4 5 6 7 node 0 size: 29564 MB node 0 free: 13351 MB node distances: node 0 0: 10",
"virsh dumpxml testguest1 <domain type='kvm'> <name>testguest1</name> <currentMemory unit='GiB'>31</currentMemory> <memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>0</node> <block unit='MiB'>2</block> <requested unit='GiB'>16</requested> <current unit='GiB'>16</current> </target> <alias name='ua-virtiomem0'/> <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> </domain>",
"<domain> [...] <blkiotune> <weight>800</weight> <device> <path>/dev/sda</path> <weight>1000</weight> </device> <device> <path>/dev/sdb</path> <weight>500</weight> </device> </blkiotune> [...] </domain>",
"virsh blkiotune VM-name --device-weights device , I/O-weight",
"virsh blkiotune testguest1 --device-weights /dev/sda, 500",
"virsh blkiotune testguest1 Block I/O tuning parameters for domain testguest1: weight : 800 device_weight : [ {\"sda\": 500}, ]",
"virsh domblklist rollin-coal Target Source ------------------------------------------------ vda /var/lib/libvirt/images/rollin-coal.qcow2 sda - sdb /home/horridly-demanding-processes.iso",
"lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT zram0 252:0 0 4G 0 disk [SWAP] nvme0n1 259:0 0 238.5G 0 disk ├─nvme0n1p1 259:1 0 600M 0 part /boot/efi ├─nvme0n1p2 259:2 0 1G 0 part /boot └─nvme0n1p3 259:3 0 236.9G 0 part └─luks-a1123911-6f37-463c-b4eb-fxzy1ac12fea 253:0 0 236.9G 0 crypt /home",
"virsh blkiotune VM-name --parameter device , limit",
"virsh blkiotune rollin-coal --device-read-iops-sec /dev/nvme0n1p3,1000 --device-write-iops-sec /dev/nvme0n1p3,1000 --device-write-bytes-sec /dev/nvme0n1p3,52428800 --device-read-bytes-sec /dev/nvme0n1p3,52428800",
"virsh edit <example_vm>",
"<disk type='block' device='disk'> <driver name='qemu' type='raw' queues='N' /> <source dev='/dev/sda'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk>",
"<controller type='scsi' index='0' model='virtio-scsi'> <driver queues='N' /> </controller>",
"virsh edit <testguest1> <domain type='kvm'> <name>testguest1</name> <vcpu placement='static'>8</vcpu> <iothreads>1</iothreads> </domain>",
"virsh edit <testguest1> <domain type='kvm'> <name>testguest1</name> <devices> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='native' iothread='1' /> <source file='/var/lib/libvirt/images/test-disk.raw'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> </devices> </domain>",
"virsh edit <testguest1> <domain type='kvm'> <name>testguest1</name> <devices> <controller type='scsi' index='0' model='virtio-scsi'> <driver iothread='1' /> <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/> </controller> </devices> </domain>",
"virsh edit <vm_name>",
"<domain type='kvm'> <name>testguest1</name> <devices> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/> <source file='/var/lib/libvirt/images/test-disk.raw'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </disk> </devices> </domain>",
"virt-xml testguest1 --edit --cpu host-model",
"virsh vcpucount testguest maximum config 4 maximum live 2 current config 2 current live 1",
"virsh setvcpus testguest 8 --maximum --config",
"virsh setvcpus testguest 4 --live",
"virsh setvcpus testguest 1 --config",
"virsh vcpucount testguest maximum config 8 maximum live 4 current config 1 current live 4",
"virsh nodeinfo CPU model: x86_64 CPU(s): 48 CPU frequency: 1200 MHz CPU socket(s): 1 Core(s) per socket: 12 Thread(s) per core: 2 NUMA cell(s): 2 Memory size: 67012964 KiB",
"dnf install numactl",
"virt-xml testguest5 --edit --vcpus placement=auto virt-xml testguest5 --edit --numatune mode=preferred",
"echo 1 > /proc/sys/kernel/numa_balancing",
"systemctl start numad",
"numactl --hardware available: 2 nodes (0-1) node 0 size: 18156 MB node 0 free: 9053 MB node 1 size: 18180 MB node 1 free: 6853 MB node distances: node 0 1 0: 10 20 1: 20 10",
"virsh edit <testguest6> <domain type='kvm'> <name>testguest6</name> <vcpu placement='static'>16</vcpu> <cpu ...> <numa> <cell id='0' cpus='0-7' memory='16' unit='GiB'/> <cell id='1' cpus='8-15' memory='16' unit='GiB'/> </numa> </domain>",
"lscpu -p=node,cpu Node,CPU 0,0 0,1 0,2 0,3 0,4 0,5 0,6 0,7 1,0 1,1 1,2 1,3 1,4 1,5 1,6 1,7",
"lscpu -p=node,cpu Node,CPU 0,0 0,1 0,2 0,3",
"virsh vcpupin testguest6 0 1 virsh vcpupin testguest6 1 3 virsh vcpupin testguest6 2 5 virsh vcpupin testguest6 3 7",
"virsh vcpupin testguest6 VCPU CPU Affinity ---------------------- 0 1 1 3 2 5 3 7",
"virsh emulatorpin testguest6 2,4 virsh emulatorpin testguest6 emulator: CPU Affinity ---------------------------------- *: 2,4",
"virsh schedinfo <vm_name> Scheduler : posix cpu_shares : 0 vcpu_period : 0 vcpu_quota : 0 emulator_period: 0 emulator_quota : 0 global_period : 0 global_quota : 0 iothread_period: 0 iothread_quota : 0",
"virsh schedinfo <vm_name> --set vcpu_period=100000",
"virsh schedinfo <vm_name> --set vcpu_quota=50000",
"virsh schedinfo <vm_name> Scheduler : posix cpu_shares : 2048 vcpu_period : 100000 vcpu_quota : 50000",
"virsh schedinfo <vm_name> Scheduler : posix cpu_shares : 1024 vcpu_period : 0 vcpu_quota : 0 emulator_period: 0 emulator_quota : 0 global_period : 0 global_quota : 0 iothread_period: 0 iothread_quota : 0",
"virsh schedinfo <vm_name> --set cpu_shares=2048 Scheduler : posix cpu_shares : 2048 vcpu_period : 0 vcpu_quota : 0 emulator_period: 0 emulator_quota : 0 global_period : 0 global_quota : 0 iothread_period: 0 iothread_quota : 0",
"{PackageManagerCommand} install ksmtuned",
"systemctl start ksm systemctl start ksmtuned",
"systemctl enable ksm Created symlink /etc/systemd/system/multi-user.target.wants/ksm.service /usr/lib/systemd/system/ksm.service systemctl enable ksmtuned Created symlink /etc/systemd/system/multi-user.target.wants/ksmtuned.service /usr/lib/systemd/system/ksmtuned.service",
"systemctl stop ksm systemctl stop ksmtuned",
"systemctl disable ksm Removed /etc/systemd/system/multi-user.target.wants/ksm.service. systemctl disable ksmtuned Removed /etc/systemd/system/multi-user.target.wants/ksmtuned.service.",
"echo 2 > /sys/kernel/mm/ksm/run",
"lsmod | grep vhost vhost_net 32768 1 vhost 53248 1 vhost_net tap 24576 1 vhost_net tun 57344 6 vhost_net",
"modprobe vhost_net",
"<interface type='network'> <source network='default'/> <model type='virtio'/> <driver name='vhost' queues='N'/> </interface>",
"ethtool -C tap0 rx-frames 64",
"dnf install perf",
"perf kvm stat report Analyze events for all VMs, all VCPUs: VM-EXIT Samples Samples% Time% Min Time Max Time Avg time EXTERNAL_INTERRUPT 365634 31.59% 18.04% 0.42us 58780.59us 204.08us ( +- 0.99% ) MSR_WRITE 293428 25.35% 0.13% 0.59us 17873.02us 1.80us ( +- 4.63% ) PREEMPTION_TIMER 276162 23.86% 0.23% 0.51us 21396.03us 3.38us ( +- 5.19% ) PAUSE_INSTRUCTION 189375 16.36% 11.75% 0.72us 29655.25us 256.77us ( +- 0.70% ) HLT 20440 1.77% 69.83% 0.62us 79319.41us 14134.56us ( +- 0.79% ) VMCALL 12426 1.07% 0.03% 1.02us 5416.25us 8.77us ( +- 7.36% ) EXCEPTION_NMI 27 0.00% 0.00% 0.69us 1.34us 0.98us ( +- 3.50% ) EPT_MISCONFIG 5 0.00% 0.00% 5.15us 10.85us 7.88us ( +- 11.67% ) Total Samples:1157497, Total events handled time:413728274.66us.",
"numastat -c qemu-kvm Per-node process memory usage (in MBs) PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- 51722 (qemu-kvm) 68 16 357 6936 2 3 147 598 8128 51747 (qemu-kvm) 245 11 5 18 5172 2532 1 92 8076 53736 (qemu-kvm) 62 432 1661 506 4851 136 22 445 8116 53773 (qemu-kvm) 1393 3 1 2 12 0 0 6702 8114 --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- Total 1769 463 2024 7462 10037 2672 169 7837 32434",
"numastat -c qemu-kvm Per-node process memory usage (in MBs) PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- 51747 (qemu-kvm) 0 0 7 0 8072 0 1 0 8080 53736 (qemu-kvm) 0 0 7 0 0 0 8113 0 8120 53773 (qemu-kvm) 0 0 7 0 0 0 1 8110 8118 59065 (qemu-kvm) 0 0 8050 0 0 0 0 0 8051 --------------- ------ ------ ------ ------ ------ ------ ------ ------ ----- Total 0 0 8072 0 8072 0 8114 8110 32368"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/optimizing-virtual-machine-performance-in-rhel_configuring-and-managing-virtualization |
8.204. rsh | 8.204. rsh 8.204.1. RHBA-2014:0795 - rsh bug fix update Updated rsh packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The rsh packages contain programs which allow users to run commands on remote machines, log in to other machines, copy files between machines (rsh, rlogin and rcp), and provide an alternate method of executing remote commands (rexec). All of these programs are run by the xinetd daemon and can be configured using the Pluggable Authentication Modules (PAM) system and configuration files in the /etc/xinetd.d/ directory. Bug Fixes BZ# 749283 Previously, the rshd daemon performed redundant calls to the setpwent() and endpwent() functions. As a consequence, rshd queried Network Information Service (NIS) servers on every remote shell (rsh) access. With this update, these redundant calls have been removed and rshd no longer contacts NIS servers unnecessarily. BZ# 802367 Prior to this update, the maximum number of command line arguments for the rsh application was not limited. However, the data buffer allocated for the arguments is always finite. Consequently, rshd terminated unexpectedly when it attempted to allocate the buffer for commands with a vast number of arguments. This update implements a limit for command-line arguments in rsh, and the described rshd crash no longer occurs. BZ# 1098955 Previously, the pam_close_session() function was not called when a remote copy (rcp) connection completed. As a consequence, the PAM session did not terminate correctly. With this update, pam_close_session() is called and the PAM session terminates as intended. BZ# 1094360 Prior to this update, the rsh application was optimized through strict aliasing rules, even though it is not a performance-sensitive application. As a consequence, the GNU compiler collection (GCC) generated warning messages about breaking the strict-aliasing rules, despite correct functionality being the priority for rsh. With this update, strict aliasing has been disabled for rsh. Therefore, GCC now ignores the strict aliasing rules and no longer interrupts rsh processes with warning messages. However, this may also lead to a slight decrease in performance. Users of rsh are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rsh |
Chapter 2. Installation | Chapter 2. Installation This chapter guides you through the steps to install AMQ Spring Boot Starter in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To build programs with AMQ Spring Boot Starter, you must install Apache Maven . To use AMQ Spring Boot Starter, you must install Java. 2.2. Using the Red Hat Maven repository Configure your Maven environment to download the client library from the Red Hat Maven repository. Procedure Add the Red Hat repository to your Maven settings or POM file. For example configuration files, see Section B.1, "Using the online repository" . <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> Add the library dependency to your POM file. <dependency> <groupId>org.amqphub.spring</groupId> <artifactId>amqp-10-jms-spring-boot-starter</artifactId> <version>2.5.0.redhat-00001</version> </dependency> The client is now available in your Maven project. 2.3. Installing a local Maven repository As an alternative to the online repository, AMQ Spring Boot Starter can be installed to your local filesystem as a file-based Maven repository. Procedure Use your subscription to download the AMQ Clients 2.10.0 Spring Boot Starter Maven repository .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. USD unzip amq-clients-2.10.0-spring-boot-starter-maven-repository.zip On Windows, right-click the .zip file and select Extract All . Configure Maven to use the repository in the maven-repository directory inside the extracted install directory. For more information, see Section B.2, "Using a local repository" . | [
"<repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository>",
"<dependency> <groupId>org.amqphub.spring</groupId> <artifactId>amqp-10-jms-spring-boot-starter</artifactId> <version>2.5.0.redhat-00001</version> </dependency>",
"unzip amq-clients-2.10.0-spring-boot-starter-maven-repository.zip"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_spring_boot_starter/installation |
Chapter 8. Using Bring-Your-Own-Host (BYOH) Windows instances as nodes | Chapter 8. Using Bring-Your-Own-Host (BYOH) Windows instances as nodes Bring-Your-Own-Host (BYOH) allows for users to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users looking to mitigate major disruptions in the event that a Windows server goes offline. 8.1. Configuring a BYOH Windows instance Creating a BYOH Windows instance requires creating a config map in the Windows Machine Config Operator (WMCO) namespace. Prerequisites Any Windows instances that are to be attached to the cluster as a node must fulfill the following requirements: The instance must be on the same network as the Linux worker nodes in the cluster. Port 22 must be open and running an SSH server. The default shell for the SSH server must be the Windows Command shell , or cmd.exe . Port 10250 must be open for log collection. An administrator user is present with the private key used in the secret set as an authorized SSH key. If you are creating a BYOH Windows instance for an installer-provisioned infrastructure (IPI) AWS cluster, you must add a tag to the AWS instance that matches the spec.template.spec.value.tag value in the machine set for your worker nodes. For example, kubernetes.io/cluster/<cluster_id>: owned or kubernetes.io/cluster/<cluster_id>: shared . If you are creating a BYOH Windows instance on vSphere, communication with the internal API server must be enabled. The hostname of the instance must follow the RFC 1123 DNS label requirements, which include the following standards: Contains only lowercase alphanumeric characters or '-'. Starts with an alphanumeric character. Ends with an alphanumeric character. Note Windows instances deployed by the WMCO are configured with the containerd container runtime. Because the WMCO installs and manages the runtime, it is recommended that you not manually install containerd on nodes. Procedure Create a ConfigMap named windows-instances in the WMCO namespace that describes the Windows instances to be added. Note Format each entry in the config map's data section by using the address as the key while formatting the value as username=<username> . Example config map kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: 10.1.42.1: |- 1 username=Administrator 2 instance.example.com: |- username=core 1 The address that the WMCO uses to reach the instance over SSH, either a DNS name or an IPv4 address. A DNS PTR record must exist for this address. It is recommended that you use a DNS name with your BYOH instance if your organization uses DHCP to assign IP addresses. If not, you need to update the windows-instances ConfigMap whenever the instance is assigned a new IP address. 2 The name of the administrator user created in the prerequisites. 8.2. Removing BYOH Windows instances You can remove BYOH instances attached to the cluster by deleting the instance's entry in the config map. Deleting an instance reverts that instance back to its state prior to adding to the cluster. Any logs and container runtime artifacts are not added to these instances. For an instance to be cleanly removed, it must be accessible with the current private key provided to WMCO. 
For example, to remove the 10.1.42.1 instance from the example, the config map would be changed to the following: kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: instance.example.com: |- username=core Deleting windows-instances is viewed as a request to deconstruct all Windows instances added as nodes. | [
"kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: 10.1.42.1: |- 1 username=Administrator 2 instance.example.com: |- username=core",
"kind: ConfigMap apiVersion: v1 metadata: name: windows-instances namespace: openshift-windows-machine-config-operator data: instance.example.com: |- username=core"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/windows_container_support_for_openshift/byoh-windows-instance |
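As a rough illustration of the config-map step above, the following Python sketch creates the windows-instances ConfigMap with the official kubernetes client library. It assumes a reachable cluster through an existing kubeconfig, and the instance address (10.1.42.1) and username (Administrator) are placeholder values taken from the example; creating the ConfigMap with oc apply works just as well.

    # Hypothetical sketch: create the windows-instances ConfigMap with the
    # kubernetes Python client. Address and username values are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # assumes a valid kubeconfig for the cluster

    config_map = client.V1ConfigMap(
        metadata=client.V1ObjectMeta(
            name="windows-instances",
            namespace="openshift-windows-machine-config-operator",
        ),
        data={"10.1.42.1": "username=Administrator"},
    )

    core_v1 = client.CoreV1Api()
    core_v1.create_namespaced_config_map(
        namespace="openshift-windows-machine-config-operator",
        body=config_map,
    )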
Appendix B. Revision History | Appendix B. Revision History Note that revision numbers relate to the edition of this manual, not to version numbers of Red Hat Enterprise Linux. Revision History Revision 7.0-31 Wed Nov 11 2020 Florian Delehaye Updated with minor fixes for 7.9 GA publication. Revision 7.0-30 Tue Aug 06 2019 Marc Muehlfeld Document version for 7.7 GA publication. Revision 7.0-29 Tue Apr 08 2019 Marc Muehlfeld Added Configuring the Files Provider for SSSD and Displaying User Data . Minor fixes and updates. Revision 7.0-28 Fri Oct 26 2018 Filip Hanzelka Preparing document for 7.6 GA publication. Revision 7.0-27 Mon Jun 25 2018 Filip Hanzelka Updated Working with certmonger . Revision 7.0-26 Tue Apr 10 2018 Filip Hanzelka Preparing document for 7.5 GA publication. Revision 7.0-25 Mon Mar 19 2018 Lucie Manaskova Minor updates. Revision 7.0-24 Mon Feb 12 2018 Aneta Steflova Petrova Minor fixes and updates. Revision 7.0-23 Mon Jan 29 2018 Aneta Steflova Petrova Minor fixes. Revision 7.0-22 Mon Dec 4 2017 Aneta Steflova Petrova Updated Smart cards . Revision 7.0-21 Mon Nov 20 2017 Aneta Steflova Petrova Minor fixes. Revision 7.0-20 Mon Nov 6 2017 Aneta Steflova Petrova Minor fixes. Revision 7.0-19 Mon Aug 14 2017 Aneta Steflova Petrova Updated sections that referred to the coolkey package. Revision 7.0-18 Tue Jul 18 2017 Aneta Steflova Petrova Document version for 7.4 GA publication. Revision 7.0-17 Mon Mar 27 2017 Aneta Steflova Petrova Fixed broken links. Revision 7.0-16 Mon Feb 27 2017 Aneta Steflova Petrova Updated Kerberos KDC proxy. Other minor updates. Revision 7.0-15 Wed Dec 7 2016 Aneta Steflova Petrova Added SSSD client-side views. Other minor updates. Revision 7.0-14 Tue Oct 18 2016 Aneta Steflova Petrova Version for 7.3 GA publication. Revision 7.0-13 Wed Jul 27 2016 Marc Muehlfeld Added Kerberos over HTTP (kdcproxy), requesting a certificate through SCEP, other minor updates. Revision 7.0-11 Thu Mar 03 2016 Aneta Petrova Added restricting domains for PAM services. Revision 7.0-10 Tue Feb 09 2016 Aneta Petrova Split authconfig chapter into smaller chapters, other minor updates. Revision 7.0-9 Thu Nov 12 2015 Aneta Petrova Version for 7.2 GA release. Revision 7.0-8 Fri Mar 13 2015 Tomas Capek Async update with last-minute edits for 7.1. Revision 7.0-6 Wed Feb 25 2015 Tomas Capek Version for 7.1 GA release. Revision 7.0-4 Fri Dec 05 2014 Tomas Capek Rebuild to update the sort order on the splash page. Revision 7.0-1 July 16, 2014 Ella Deon Ballard Initial draft for RHEL 7.0. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/doc-history |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_release_notes/making-open-source-more-inclusive_6.0.0_rn |
Chapter 6. Configure Host Names | Chapter 6. Configure Host Names 6.1. Understanding Host Names There are three classes of hostname : static, pretty, and transient. The " static " host name is the traditional hostname , which can be chosen by the user, and is stored in the /etc/hostname file. The " transient " hostname is a dynamic host name maintained by the kernel. It is initialized to the static host name by default, whose value defaults to " localhost " . It can be changed by DHCP or mDNS at runtime. The " pretty " hostname is a free-form UTF8 host name for presentation to the user. Note A host name can be a free-form string up to 64 characters in length. However, Red Hat recommends that both static and transient names match the fully-qualified domain name ( FQDN ) used for the machine in DNS , such as host.example.com . It is also recommended that the static and transient names consists only of 7 bit ASCII lower-case characters, no spaces or dots, and limits itself to the format allowed for DNS domain name labels, even though this is not a strict requirement. Older specifications do not permit the underscore, and so their use is not recommended. The hostnamectl tool will enforce the following: Static and transient host names to consist of a-z , A-Z , 0-9 , " - " , " _ " and " . " only, to not begin or end in a dot, and to not have two dots immediately following each other. The size limit of 64 characters is enforced. 6.1.1. Recommended Naming Practices The Internet Corporation for Assigned Names and Numbers (ICANN) sometimes adds previously unregistered Top-Level Domains (such as .yourcompany ) to the public register. Therefore, Red Hat strongly recommends that you do not use a domain name that is not delegated to you, even on a private network, as this can result in a domain name that resolves differently depending on network configuration. As a result, network resources can become unavailable. Using domain names that are not delegated to you also makes DNSSEC more difficult to deploy and maintain, as domain name collisions require manual configuration to enable DNSSEC validation. See the ICANN FAQ on domain name collision for more information on this issue. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/ch-Configure_Host_Names |
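As an illustration of the hostnamectl constraints described above, here is a minimal Python sketch that checks a candidate static host name against those rules (characters a-z, A-Z, 0-9, "-", "_" and "." only, no leading or trailing dot, no two consecutive dots, at most 64 characters). The helper function name is my own and is not part of any Red Hat tooling.

    import re

    # Hypothetical helper mirroring the hostnamectl rules quoted above.
    _ALLOWED = re.compile(r'^[A-Za-z0-9._-]+$')

    def is_valid_static_hostname(name: str) -> bool:
        if not name or len(name) > 64:
            return False                      # size limit of 64 characters
        if name.startswith('.') or name.endswith('.'):
            return False                      # must not begin or end in a dot
        if '..' in name:
            return False                      # no two dots immediately following each other
        return bool(_ALLOWED.match(name))     # a-z, A-Z, 0-9, "-", "_" and "." only

    print(is_valid_static_hostname('host.example.com'))   # True
    print(is_valid_static_hostname('bad..name.'))         # False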
Chapter 5. Cluster overview page | Chapter 5. Cluster overview page The Cluster overview page shows the status of a Kafka cluster. Here, you can assess the readiness of Kafka brokers, identify any cluster errors or warnings, and gain crucial insights into the cluster's health. At a glance, the page provides information on the number of topics and partitions within the cluster, along with their replication status. Explore cluster metrics through charts displaying used disk space, CPU utilization, and memory usage. Additionally, topic metrics offer a comprehensive view of total incoming and outgoing byte rates for all topics in the Kafka cluster. 5.1. Accessing cluster connection details for client access When connecting a client to a Kafka cluster, retrieve the necessary connection details from the Cluster overview page by following these steps. Procedure From the Streams for Apache Kafka Console, click the name of the Kafka cluster that you want to connect to, then click Cluster overview and Cluster connection details . Copy and add bootstrap address and connection properties to your Kafka client configuration to establish a connection with the Kafka cluster. Note Ensure that the authentication type used by the client matches the authentication type configured for the Kafka cluster. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_the_streams_for_apache_kafka_console/con-cluster-overview-page-str |
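To make the "copy the bootstrap address and connection properties into your client configuration" step concrete, the following is a minimal sketch using the kafka-python library. The bootstrap address, security protocol, and SASL credentials are placeholders standing in for whatever the Cluster connection details panel shows for your cluster, and the authentication settings must match the authentication type actually configured for the Kafka cluster.

    # Hypothetical client configuration; replace the placeholder values with
    # the details copied from Cluster overview > Cluster connection details.
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="my-cluster-kafka-bootstrap.example.com:9093",  # placeholder
        security_protocol="SASL_SSL",          # must match the cluster listener
        sasl_mechanism="SCRAM-SHA-512",        # must match the configured authentication type
        sasl_plain_username="my-user",         # placeholder credentials
        sasl_plain_password="my-password",
    )
    producer.send("example-topic", b"hello from the cluster connection example")
    producer.flush()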
Chapter 21. Desktop and graphics | Chapter 21. Desktop and graphics 21.1. GNOME Shell is the default desktop environment RHEL 8 is distributed with GNOME Shell as the default desktop environment. All packages related to KDE Plasma Workspaces (KDE) have been removed, and it is no longer possible to use KDE as an alternative to the default GNOME desktop environment. Red Hat does not support migration from RHEL 7 with KDE to RHEL 8 GNOME. Users of RHEL 7 with KDE are recommended to back up their data and install RHEL 8 with GNOME Shell. 21.2. Notable changes in GNOME Shell RHEL 8 is distributed with GNOME Shell, version 3.28. This section: Highlights enhancements related to GNOME Shell, version 3.28. Informs about the change in default combination of GNOME Shell environment and display protocol. Explains how to access features that are not available by default. Explains changes in GNOME tools for software management. 21.2.1. GNOME Shell, version 3.28 in RHEL 8 GNOME Shell, version 3.28 is available in RHEL 8. Notable enhancements include: New GNOME Boxes features New on-screen keyboard Extended devices support, most significantly integration for the Thunderbolt 3 interface Improvements for GNOME Software, dconf-editor and GNOME Terminal 21.2.2. GNOME Shell environments GNOME 3 provides two essential environments: GNOME Standard GNOME Classic Both environments can use two different protocols to build a graphical user interface: The X11 protocol, which uses X.Org as the display server. The Wayland protocol, which uses GNOME Shell as the Wayland compositor and display server. This solution of display server is further referred as GNOME Shell on Wayland . The default combination in RHEL 8 is GNOME Standard environment using GNOME Shell on Wayland as the display server. However, you may want to switch to another combination of GNOME Shell environment and graphics protocol stack. For more information, see Section 21.3, "Selecting GNOME environment and display protocol" . Additional resources For more information about basics of using both GNOME Shell environments, see Overview of GNOME environments . 21.2.3. Desktop icons In RHEL 8, the Desktop icons functionality is no longer provided by the Nautilus file manager, but by the desktop icons gnome-shell extension. To be able to use the extension, you must install the gnome-shell-extension-desktop-icons package available in the Appstream repository. Additional resources For more information about Desktop icons in RHEL 8, see Managing desktop icons . 21.2.4. Fractional scaling On a GNOME Shell on Wayland session, the fractional scaling feature is available. The feature makes it possible to scale the GUI by fractions, which improves the appearance of scaled GUI on certain displays. Note that the feature is currently considered experimental and is, therefore, disabled by default. To enable fractional scaling, run the following command: 21.2.5. GNOME Software for package management The gnome-packagekit package that provided a collection of tools for package management in graphical environment on RHEL 7 is no longer available. On RHEL 8, similar functionality is provided by the GNOME Software utility, which enables you to install and update applications and gnome-shell extensions. GNOME Software is distributed in the gnome-software package. Additional resources For more information for installing applications with GNOME software , see Installing applications in GNOME . 21.2.6. 
Opening graphical applications with sudo When attempting to open a graphical application in a terminal using the sudo command, you must do the following: X11 applications If the application uses the X11 display protocol, add the local user root in the X server access control list. As a result, root is allowed to connect to Xwayland , which translates the X11 protocol into the Wayland protocol and reversely. Example 21.1. Adding root to the X server access control list to open xclock with sudo USD xhost +si:localuser:root USD sudo xclock Wayland applications If the application is Wayland native, include the -E option. Example 21.2. Opening GNOME Calculator with sudo USD sudo -E gnome-calculator Otherwise, if you type just sudo and the name of the application, the operation of opening the application fails with the following error message: 21.3. Selecting GNOME environment and display protocol For switching between various combinations of GNOME environment and graphics protocol stacks, use the following procedure. Procedure From the login screen (GDM), click the gear button to the Sign In button. Note You cannot access this option from the lock screen. The login screen appears when you first start RHEL 8 or when you log out of your current session. From the drop-down menu that appears, select the option that you prefer. Note Note that in the menu that appears on the login screen, the X.Org display server is marked as X11 display server. Important The change of GNOME environment and graphics protocol stack resulting from the above procedure is persistent across user logouts, and also when powering off or rebooting the computer. 21.4. Removed functionality gnome-terminal removed support for non-UTF8 locales in RHEL 8 The gnome-terminal application in RHEL 8 and later releases refuses to start when the system locale is set to non-UTF8 because only UTF8 locales are supported. For more information, see the The gnome-terminal application fails to start when the system locale is set to non-UTF8 Knowledgebase article. | [
"gsettings set org.gnome.mutter experimental-features \"['scale-monitor-framebuffer']\"",
"No protocol specified Unable to init server: could not connect: connection refused Failed to parse arguments: Cannot open display"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/desktop-and-graphics_considerations-in-adopting-rhel-8 |
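The fractional scaling switch above is a single gsettings call; purely as a convenience, the following Python sketch wraps that same command with subprocess so it can be toggled from a script. It does nothing beyond what the documented gsettings command already does.

    # Sketch: enable the experimental fractional scaling feature by invoking
    # the documented gsettings command from Python.
    import subprocess

    def enable_fractional_scaling():
        subprocess.run(
            [
                "gsettings", "set", "org.gnome.mutter", "experimental-features",
                "['scale-monitor-framebuffer']",
            ],
            check=True,
        )

    if __name__ == "__main__":
        enable_fractional_scaling()
        # Verify the setting took effect.
        print(subprocess.run(
            ["gsettings", "get", "org.gnome.mutter", "experimental-features"],
            capture_output=True, text=True, check=True,
        ).stdout.strip())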
7.238. sssd | 7.238. sssd 7.238.1. RHSA-2013:0508 - Low: sssd security, bug fix and enhancement update Updated sssd packages that fix two security issues, multiple bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The System Security Services Daemon (SSSD) provides a set of daemons to manage access to remote directories and authentication mechanisms. It provides an NSS and PAM interface toward the system and a pluggable back-end system to connect to multiple different account sources. It is also the basis to provide client auditing and policy services for projects such as FreeIPA. Note The sssd packages have been upgraded to upstream version 1.9.2, which provides a number of bug fixes and enhancements over the version. BZ#827606 Security Fixes CVE-2013-0219 A race condition was found in the way SSSD copied and removed user home directories. A local attacker who is able to write into the home directory of a different user who is being removed could use this flaw to perform symbolic link attacks, possibly allowing them to modify and delete arbitrary files with the privileges of the root user. CVE-2013-0220 Multiple out-of-bounds memory read flaws were found in the way the autofs and SSH service responders parsed certain SSSD packets. An attacker could spend a specially-crafted packet that, when processed by the autofs or SSH service responders, would cause SSSD to crash. This issue only caused a temporary denial of service, as SSSD was automatically restarted by the monitor process after the crash. The CVE-2013-0219 and CVE-2013-0220 issues were discovered by Florian Weimer of the Red Hat Product Security Team. Bug Fixes BZ# 854619 When SSSD was built without sudo support, the ldap_sudo_search_base value was not set and the namingContexts LDAP attribute contained a zero-length string. Consequently, SSSD tried to set ldap_sudo_search_base with this string and failed. Therefore, SSSD was unable to establish a connection with the LDAP server and switched to offline mode. With this update, SSSD considers the zero-length namingContexts value the same way as if no value is available; thus preventing this bug. Note that this issue was primarily affecting Novell eDirectory server users. BZ#840089 When the ldap_chpass_update_last_change option was enabled, the shadowLastChange attribute contained a number of seconds instead of days. Consequently, when shadowLastChange was in use and the user was prompted to update their expiring password, shadowLastChange was not updated. The user then continued to get an error until they were locked out of the system. With this update, the number of days is stored in shadowLastChange attribute and users are able to change their expiring passwords as expected. BZ#847039 When the kpasswd server was configured but was unreachable during authentication, SSSD considered it the same way as if the KDC server was unreachable. As a consequence, the user failed to authenticate. Now, SSSD considers an unreachable kpasswd server as a fatal error only when performing a password change and users can log in successfully. BZ#847043 Previously, canceling a pthread which was in the midst of any SSS client usage could leave the client mutex locked. 
As a consequence, the call to any SSS function became unresponsive, waiting for the mutex to unlock. With this update, a more robust mutex is used, and canceling such a pthread no longer keeps the client mutex locked. BZ# 872324 When SSSD created an SELinux login file, it erroneously kept the file descriptor of this file opened. As a consequence, the number of the file descriptors used by SSSD increased every time a user logged in. SSSD now closes the file descriptor when it is no longer needed, thus protecting it from leaking. BZ# 801719 Previously, reverse DNS lookup was not performed to get the Fully Qualified Domain Name (FQDN) of a host specified by an IP address. As a consequence, SSH host public key lookup was incorrectly attempted with the textual IP address as an FQDN. Reverse DNS lookup is now performed to get the FQDN of the host before the SSH host public key lookup. SSH host public key lookup now functions correctly using the FQDN of the host. BZ#857108 Kerberos options were loaded separately in the krb5 utility and the IPA provider with different code paths. The code was fixed in krb5 but not in the IPA provider. Consequently, a Kerberos ticket was not renewed in time when IPA was used as an authentication provider. With this update, Kerberos options are loaded using a common API and Kerberos tickets are renewed as expected in the described scenario. BZ#849081 When SSSD was configured to use SSL during communication with an LDAP server and the initialization of SSL failed, SSSD kept the connection to the LDAP server opened. As a consequence, the number of connections to the LDAP server was increased with every request via SSSD, until the LDAP server ran out of available file descriptors. With this update, when the SSL initialization fails, SSSD closes the connection immediately and the number of connections does not grow. BZ#819057 If the LDAP provider was configured to use GSSAPI authentication but the first configured Kerberos server to authenticate against was offline, then SSSD did not retry the other, possibly working servers. The failover code was amended so that all Kerberos servers are tried when GSSAPI authentication is performed in the LDAP provider. The LDAP provider is now able to authenticate against servers that are only configured as failover. BZ# 822404 Previously, SSSD did not use the correct attribute mapping when a custom schema was used. As a consequence, if the administrator configured SSSD with a custom attribute map, the autofs integration did not work. The attribute mapping was fixed and SSSD now works with a custom attribute schema. BZ#826192, BZ# 827036 In some cases, the SSSD responder processes did not properly close the file descriptors they used to communicate with the client library. As a consequence, the descriptors leaked, and, over time, caused denial of service because SSSD reached the limit of open file descriptors defined in the system. SSSD now proactively closes file descriptors that were not active for some time, making the file descriptor usage consistent. BZ#829742 The SSSD back-end process kept a pointer to the server it was connected to in all cases, even when the server entry was about to expire. Most customers encountered this issue when SRV resolution was enabled. As a consequence, when the server entry expired while SSSD was using it, the back-end process crashed. An additional check has been added to SSSD to ensure the server object is valid before using it. SSSD no longer crashes when using SRV discovery. 
BZ# 829740 When the SSSD daemon was in the process of starting, the parent processes quit right after spawning the child process. As a consequence, the init script printed [OK] after the parent process terminated, which was before SSSD was actually functional. After this update, the parent processes are not terminated until all worker processes are up. Now, the administrator can start using SSSD after the init script prints [OK]. BZ#836555 Previously, SSSD always treated the values of attributes that configure the "shadow" LDAP password policy as absolute. As a consequence, an administrator could not configure properties of the "shadow" LDAP password policy as "valid forever". The LDAP "shadow" password attributes are now extended to also allow "-1" as a valid value and an administrator can use the reserved value of "-1" as a "valid forever". BZ#842753 When a service with a protocol was requested from SSSD, SSSD performed access to an unallocated memory space, which caused it to occasionally crash during service lookup. Now, SSSD does not access unallocated memory and no longer crashes during service lookups. BZ#842842 When the LDAP user record contained an empty attribute, the user was not stored correctly in the SSSD cache. As a consequence, the user and group memberships were missing. After this update, empty attributes are not considered an error and the user is stored correctly in the SSSD cache. As a result, the user is present and the group membership can be successfully evaluated. BZ# 845251 When multiple servers were configured and SSSD was unable to resolve the host name of a server, it did not try the server in the list. As a consequence, SSSD went offline even when a working server was present in the configuration file after the one with the unresolvable hostname. SSSD now tries the server in the list and failover works as expected. BZ#847332 Previously, the description of ldap_*_search_base options in the sssd-ldap(5) man page was missing syntax details for these options which made it unclear how the search base should be specified. The description of ldap_*_search_base options in sssd-ldap(5) man page has been amended so that the format of the search base is now clear. BZ#811984 If the krb5_canonicalize option was set to True or not present at all in the /etc/sssd/sssd.conf file, the client principal could change as a result of the canonicalization. However, SSSD still saved the original principal. As the incorrect principal was saved, the GSSAPI authentication failed. The Kerberos helper process that saves the principals was amended so that the canonicalized principal is saved if canonicalization is enabled. The GSSAPI binds now work correctly even for cases where the principal is changed as a result of the canonicalization. BZ# 886038 Previously, SSSD kept the file descriptors to the log files open. Consequently on occasions like moving the actual log file and restarting the back end, SSSD still kept the file descriptors open. After this update, SSSD closes the file descriptor after child process execution. As a result, after successful start of the back end, the file descriptor to log files is closed. BZ# 802718 Previously, the proxy domain type of SSSD allowed looking up a user only by its "primary name" in the LDAP server. If SSSD was configured with a "proxy domain" and the LDAP entry contained more name attributes, only the primary one could be used for lookups. For this update, the proxy provider was enhanced to also handle aliases in addition to primary user names. 
An administrator can now look up a user by any of his names when using the proxy provider. BZ# 869013 The sudo "smart refresh" operation was not performed if the LDAP server did not contain any rule when SSSD was started. As a consequence, newly created sudo rules were found after a longer period of time than the "ldap_sudo_smart_refresh_interval" option displayed. The sudo "smart refresh" operation is now performed and newly created sudo rules are found within the ldap_sudo_smart_refresh_interval time span. BZ#790090 The SSSD "local" domain (id_provider=local) performed a bad check on the validity of the access_provider value. If the access_provider option was set with "permit", which is a correct value, SSSD failed with an error. The check for the access_provider option value has been corrected and SSSD now allows the correct access_provider value for domains with id_provider=local. BZ# 874579 Previously, SELinux usermap contexts were not ordered correctly if the SELinux mappings were using HBAC rules as a definition of what users to apply the mapping to and if the Identity Management server was not reachable at the same time. As a consequence, an invalid SELinux context could be assigned to a user. SELinux usermap contexts are now ordered correctly, and the SELinux context is assigned to a user successfully. BZ#700805 If SSSD was configured to locate servers using SRV queries, but the default DNS domain was not configured, SSSD printed a DEBUG message. The DEBUG message, which contained an "unknown domain" string, could confuse the user. The DEBUG messages were fixed so that they specifically report that the DNS domain is being looked up, and only print known domains. BZ#871424 Previously, the chpass_provider directive was missing in the SSSD authconfig API. As a consequence, the authconfig utility was unable to configure SSSD if the chpass_provider option was present in the SSSD configuration file. The chpass_provider option has been included in the SSSD authconfig API and now the authconfig utility does not consider this option to be incorrect. BZ# 874618 Previously, the sss_cache tool did not accept fully qualified domain names (FQDN). As a consequence, the administrator was unable to force the expiration of a user record in the SSSD cache with a FQDN. The sss_cache tool now accepts an FQDN and the administrator is able to force the expiration of a user record in the SSSD cache with an FQDN. BZ# 870039 Previously, when the sss_cache tool was run after an SSSD downgrade, the cache file remained the same as the one used for the version of SSSD. The sss_cache tool could not manipulate the cache file and a confusing error message was printed. The "invalid database version" error message was improved in the sss_cache tool. Now, when an invalid cache version is detected, the sss_cache tool prints a suggested solution. BZ# 882923 When the proxy provider did not succeed in finding a requested user, the result of the search was not stored in the negative cache (which stores entries that are not found when searched for). A subsequent request for the same user was not answered by the negative cache, but was rather looked up again from the remote server. This bug had a performance impact. The internal error codes were fixed, allowing SSSD to store search results that yielded no entries into the negative cache. Subsequent lookups for non-existent entries are answered from the negative cache and, by effect, are very fast. 
BZ# 884600 Previously, during LDAP authentication, SSSD attempted to contact all of the servers on the server list if every server failed. However, SSSD tried to connect to the server only if the current connection timed out. SSSD now tries to contact the server on any error and connection attempts work as expected. BZ# 861075 When the sssd_be process was forcefully terminated, the SSSD responder processes failed to reconnect if the attempt was performed before the sssd_be process was ready. This caused the responder to be restarted. Occasionally, the responder restarted several times before sssd_be was ready, hitting the maximum number of restarts threshold, after which it was terminated completely. As a consequence, the SSSD responder was not gracefully restarted. After this update, each restart of the SSSD responder process is done with an increasing delay, so that the sssd_be process has enough time to recover before a responder is restarted. BZ#858345 Previously, the sssd_pam responder was not properly configured to recover from a back end disconnection. The PAM requests that were pending before the disconnection were not canceled. Thus, new requests for the same user were erroneously detected as similar requests and piled up on top of the ones. This caused the PAM operation to time out with the following error: As a consequence, the user could not log in. After this update, pending requests are canceled after disconnection and the user is able to log in when the pam responder reconnects. BZ# 873032 Previously, the sss_cache utility was not included in the main SSSD package and users were unaware of it, unless they installed the sssd-tools package. After this update, the sss_cache utility has been moved to the sssd package. BZ# 872683 When the anonymous bind was disabled and enumeration was enabled, SSSD touched an invalid array element during enumeration because the array was not NULL terminated. This caused the sssd_be process to crash. The array is now NULL terminated and the sssd_be process does not crash during enumeration when the anonymous bind is disabled. BZ# 870505 When SSSD was configured with multiple domains, the sss_cache tool searched for an object only in the first configured domain and ignored the others. As a consequence, the administrator could not use the sss_cache utility on objects from an arbitrary domain. The sss_cache tool now searches all domains and the administrator can use the tool on objects from an arbitrary domain. Enhancements BZ# 768168 , BZ# 832120 , BZ# 743505 A new ID mapping library that is capable of automatically generating UNIX IDs from Windows Security Identifiers (SIDs) has been added to SSSD. An administrator is now able to use Windows accounts easily in a UNIX environment. Also, a new Active Directory provider that contains the attribute mappings tailored specifically for use with Active Directory has been added to SSSD. When id_provider=ad is configured, the configuration no longer requires setting the attribute mappings manually. A new provider for SSSD has been implemented and the administrator can now set up an Active Directory client without having to know the specific Active Directory attribute mappings. The performance of the Active Directory provider is better than the performance of the LDAP provider, especially during login. BZ# 789470 When SSSD failed over to another server in its failover list, it stuck with that server as long as it worked. 
As a result, if the SSSD failed over to a server in another region, it did not reconnect to a closer server until it was restarted or until the backup server stopped working. The concept of a "backup server" has been introduced to SSSD and if SSSD fails over to a server which is listed as a backup server in the configuration, it periodically tries to reconnect to one of the primary servers. BZ#789473 A new sss_seed utility has been introduced in SSSD. An administrator can save a pre-seeded user entry into the SSSD cache which is used until the user can actually refresh the entry with a non-pre-seeded entry from the directory. BZ# 768165 Active Directory uses a nonstandard format when a large group that does not fit into a single "page" is returned. By default, the single page size contains 1500 members and if the response exceeds the page size, the range extension is used. If a group was stored on an Active Directory server which contained more than 1500 members, the response from Active Directory contained the proprietary format which SSSD could not parse. SSSD was improved so that it is able to parse the range extension and can now process groups with more than 1500 group members coming from the Active Directory. BZ# 766000 Previously, administrators were forced to distribute SELinux mappings via means that were error prone. Therefore, a centralized store of SELinux mappings was introduced to define which user gets which context after logging into a certain machine. SSSD is able to read mappings from an Identity Management server, process them according to a defined algorithm and select the appropriate SELinux context which is later consumed by the pam_selinux module. The Identity Management server administrator is now able to centrally define SELinux context mappings and the Identity Management clients process the mappings when a user logs in using his Identity Management credentials. BZ# 813327 The automounter can be configured to read autofs maps from a centralized server such as an LDAP server. But when the network is down or the server is not reachable, the automounter is unable to serve maps. A new responder has been introduced to SSSD that is able to communicate with the automounter daemon. Automounter can now request the maps via SSSD instead of going directly to the server. As a result, the automounter is able to serve maps even in case of an outage of the LDAP server. BZ# 761573 A new sudo responder has been implemented in SSSD as well as a client library in sudo itself. SSSD is able to act as a transparent proxy for serving sudo rules for the sudo binary. Now, when the centralized sudo rules source is not available, for instance when the network is down, SSSD is able to fall back to cached rules, providing transparent access to sudo rules from a centralized database. BZ# 789507 Prior to this update, even if a user entry was cached by SSSD, it had to be read from the cache file on the disk. This caused the cache readings to be slow in some performance-critical environments. A new layer of cache, stored in the memory was introduced, greatly improving the performance of returning cached entries. BZ#771412 The pam_pwd_expiration_warning option can be used to limit the number of days a password expiration warning is shown for. However, SSSD did not allow to unconditionally pass any password warning coming from the server to the client. 
The behavior of pam_pwd_expiration_warning was modified so that if the option is set to 0, it is always passed on to the client, regardless of the value of the warning. As a result, after setting the pam_pwd_expiration_warning option to 0, the administrator will always see the expiration warning if the server sends one. BZ#771975 The force_timeout option has been made configurable and the administrator can now change the force_timeout option for environments where SSSD subprocesses might be unresponsive for some time. All users of sssd are advised to upgrade to these updated packages, which correct these issues, fix these bugs and add these enhancements. | [
"Connection to SSSD failed: Timer Expired"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/sssd |
A.15. taskset | A.15. taskset The taskset tool is provided by the util-linux package. It allows administrators to retrieve and set the processor affinity of a running process, or launch a process with a specified processor affinity. Important taskset does not guarantee local memory allocation. If you require the additional performance benefits of local memory allocation, Red Hat recommends using numactl instead of taskset. To set the CPU affinity of a running process, run the following command: Replace processors with a comma-delimited list of processors or ranges of processors (for example, 1,3,5-7 ). Replace pid with the process identifier of the process that you want to reconfigure. To launch a process with a specified affinity, run the following command: Replace processors with a comma-delimited list of processors or ranges of processors. Replace application with the command, options and arguments of the application you want to run. For more information about taskset , see the man page: | [
"taskset -pc processors pid",
"taskset -c processors -- application",
"man taskset"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-taskset |
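For completeness, the same affinity query and change that taskset -pc performs can also be done programmatically on Linux. The sketch below uses Python's os.sched_getaffinity and os.sched_setaffinity as an alternative illustration; the PID and CPU set are placeholders, and, like taskset itself, this does not guarantee local memory allocation.

    # Illustrative equivalent of "taskset -pc 1,3,5-7 <pid>" using the
    # standard library; works on Linux only.
    import os

    pid = 1234                      # placeholder PID of the process to reconfigure
    cpus = {1, 3, 5, 6, 7}          # equivalent of the range list 1,3,5-7

    print("before:", sorted(os.sched_getaffinity(pid)))
    os.sched_setaffinity(pid, cpus)
    print("after:", sorted(os.sched_getaffinity(pid)))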
Chapter 2. Using the Software Development Kit | Chapter 2. Using the Software Development Kit This section describes how to use the software development kit for Version 4. 2.1. Packages The following modules are most frequently used by the Python SDK: ovirtsdk4 This is the top level module. It most important element is the Connection class, which is the mechanism to connect to the server and to obtain the reference to the root of the services tree. The Error class is the base exception class that the SDK will raise when it needs to report an error. For certain kinds of errors, there are specific error classes, which extend the base error class: AuthError - Raised when authentication or authorization fails. ConnectionError - Raised when the name of the server cannot be resolved or the server is unreachable. NotFoundError - Raised when the requested object does not exist. TimeoutError - Raised when an operation times out. ovirtsdk4.types This module contains the classes that implement the types used in the API. For example, the ovirtsdk4.types.Vm class is the implementation of the virtual machine type. These classes are data containers and do not contain any logic. Instances of these classes are used as parameters and return values of service methods. The conversion to or from the underlying representation is handled transparently by the SDK. ovirtsdk4.services This module contains the classes that implement the services supported by the API. For example, the ovirtsdk4.services.VmsService class is the implementation of the service that manages the collection of virtual machines of the system. Instances of these classes are automatically created by the SDK when a service is located. For example, a new instance of the VmsService class is automatically created by the SDK when doing the following: vms_service = connection.system_service().vms_service() It is best to avoid creating instances of these classes manually, as the parameters of the constructors and, in general, all the methods except the service locators and service methods, may change in the future. There are other modules, like ovirtsdk4.http , ovirtsdk4.readers , and ovirtsdk4.writers . These are used to implement the HTTP communication and for XML parsing and rendering. Avoid using them, because they are internal implementation details that may change in the future; backwards compatibility is not guaranteed. 2.2. Connecting to the Server To connect to the server, import the ovirtsdk4 module, which contains the Connection class. This is the entry point of the SDK, and provides access to the root of the tree of services of the API: import ovirtsdk4 as sdk connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) The connection holds critical resources, including a pool of HTTP connections to the server and an authentication token. It is very important to free these resources when they are no longer in use: connection.close() Once a connection is closed, it cannot be reused. The ca.pem file is required when connecting to a server protected with TLS. In a normal installation, it is located in /etc/pki/ovirt-engine/ on the Manager machine. If you do not specify the ca_file , the system-wide CA certificate store will be used. For more information on obtaining the ca.pem file, see the REST API Guide . If the connection is not successful, the SDK will raise an ovirtsdk4.Error exception containing the details. 2.3. 
Using Types The classes in the ovirtsdk4.types module are pure data containers. They do not have any logic or operations. Instances of types can be created and modified at will. Creating or modifying an instance does not affect the server side, unless the change is explicitly passed with a call to one of the service methods described below. Changes on the server side are not automatically reflected in the instances that already exist in memory. The constructors of these classes have multiple optional arguments, one for each attribute of the type. This is intended to simplify creation of objects using nested calls to multiple constructors. This example creates an instance of a virtual machine, specifying its cluster name, template, and memory, in bytes: from ovirtsdk4 import types vm = types.Vm( name='vm1', cluster=types.Cluster( name='Default' ), template=types.Template( name='mytemplate' ), memory=1073741824 ) Using the constructors in this way is recommended, but not mandatory. You can also create the instance with no arguments in the call to the constructor and populate the object step by step, using the setters, or by using a mix of both approaches: vm = types.Vm() vm.name = 'vm1' vm.cluster = types.Cluster(name='Default') vm.template = types.Template(name='mytemplate') vm.memory=1073741824 Attributes that are defined as lists of objects in the specification of the API are implemented as Python lists. For example, the custom_properties attributes of the Vm type are defined as a list of objects of type CustomProperty . When the attributes are used in the SDK, they are a Python list: vm = types.Vm( name='vm1', custom_properties=[ types.CustomProperty(...), types.CustomProperty(...), ... ] ) Attributes that are defined as enumerated values in API are implemented as enum in Python, using the native support for enums in Python 3 and the enum34 package in Python 2.7. In this example, the status attribute of the Vm type is defined using the VmStatus enum : if vm.status == types.VmStatus.DOWN: ... elif vm.status == types.VmStatus.IMAGE_LOCKED: .... Note In the API specification, the values of enum types appear in lower case, because that is what is used for XML and JSON. The Python convention, however, is to capitalize enum values. Reading the attributes of instances of types is done using the corresponding properties: print("vm.name: %s" % vm.name) print("vm.memory: %s" % vm.memory) for custom_property in vm.custom_properties: ... 2.4. Using Links Some attributes of types are defined by the API as links. This convention indicates that the values are not normally populated when retrieving the representation of that object. Rather, a link is returned instead. For example, when retrieving a virtual machine, the XML response from the server includes the <link> attribute: The link to vm.diskattachments does not contain the actual disk attachments. To obtain the data, the Connection class provides a follow_link method that uses the value of the href XML attribute to retrieve the actual data. For example, to retrieve the details of the disks of the virtual machine, you follow the link to the disk attachments, and then to each of the disks: # Retrieve the virtual machine: vm = vm_service.get() # Follow the link to the disk attachments, and then to the disks: attachments = connection.follow_link(vm.disk_attachments) for attachment in attachments: disk = connection.follow_link(attachment.disk) print("disk.alias: " % disk.alias) 2.5. 
Locating Services The API provides a set of services, each associated with a path within the URL space of the server. For example, the service that manages the collection of virtual machines of the system is located in /vms , and the service that manages the virtual machine with identifier 123 is located in /vms/123 . In the SDK, the root of that tree of services is implemented by the system service. It is obtained calling the system_service method of the connection: system_service = connection.system_service() When you have the reference to this system service, you can use it to obtain references to other services, calling the *_service methods, called service locators, of the service. For example, to obtain a reference to the service that manages the collection of virtual machines of the system, you use the vms_service service locator: vms_service = system_service.vms_service() To obtain a reference to the service that manages the virtual machine with identifier 123 , you use the vm_service service locator of the service that manages the collection of virtual machines. It uses the identifier of the virtual machine as a parameter: vm_service = vms_service.vm_service('123') Important Calling service locators does not send a request to the server. The Python objects that they return are pure services, which do not contain any data. For example, the vm_service Python object called in this example is not the representation of a virtual machine. It is the service that is used to retrieve, update, delete, start and stop that virtual machine. 2.6. Using Services After you have located a service, you can call its service methods, which send requests to the server and do the real work. Services that manage a single object usually support the get , update , and remove methods. Services that manage collections of objects usually support the list and add methods. Both kinds of services, especially services that manage a single object, can support additional action methods. 2.6.1. Using get Methods These service methods are used to retrieve the representation of a single object. The following example retrieves the representation of the virtual machine with identifier 123 : # Find the service that manages the virtual machine: vms_service = system_service.vms_service() vm_service = vms_service.vm_service('123') # Retrieve the representation of the virtual machine: vm = vm_service.get() The response is an instance of the corresponding type, in this case an instance of the Python class ovirtsdk4.types.Vm . The get methods of some services support additional parameters that control how to retrieve the representation of the object or what representation to retrieve if there is more than one. For example, you may want to retrieve either the current state of a virtual machine or its state the time it is started, as they may be different. The get method of the service that manages a virtual machine supports a next_run Boolean parameter: # Retrieve the representation of the virtual machine, not the # current one, but the one that will be used after the # boot: vm = vm_service.get(next_run=True) See the reference documentation of the SDK for details. If the object cannot be retrieved for any reason, the SDK raises an ovirtsdk4.Error exception, with details of the failure. This includes the situation when the object does not actually exist. Note that the exception is raised when calling the get service method. 
The call to the service locator method never fails, even if the object does not exist, because that call does not send a request to the server. For example: # Call the service that manages a non-existent virtual machine. # This call will succeed. vm_service = vms_service.vm_service('junk') # Retrieve the virtual machine. This call will raise an exception. vm = vm_service.get() 2.6.2. Using list Methods These service methods retrieve the representations of the objects of a collection. This example retrieves the complete collection of virtual machines of the system: # Find the service that manages the collection of virtual # machines: vms_service = system_service.vms_service() # List the virtual machines in the collection vms = vms_service.list() The result will be a Python list containing the instances of corresponding types. For example, in this case, the result will be a list of instances of the class ovirtsdk4.types.Vm . The list methods of some services support additional parameters. For example, almost all top-level collections support a search parameter to filter the results or a max parameter to limit the number of results returned by the server. This example retrieves the names of virtual machines starting with my , with an upper limit of 10 results: vms = vms_service.list(search='name=my*', max=10) Note Not all list methods support these parameters. Some list methods support other parameters. See the reference documentation of the SDK for details. If a list of returned results is empty for any reason, the returned value will be an empty list. It will never be None . If there is an error while trying to retrieve the result, the SDK will raise an ovirtsdk4.Error exception containing the details of the failure. 2.6.3. Using add Methods These service methods add new elements to a collection. They receive an instance of the relevant type describing the object to add, send the request to add it, and return an instance of the type describing the added object. This example adds a new virtual machine called vm1 : from ovirtsdk4 import types # Add the virtual machine: vm = vms_service.add( vm=types.Vm( name='vm1', cluster=types.Cluster( name='Default' ), template=types.Template( name='mytemplate' ) ) ) If the object cannot be created for any reason, the SDK will raise an ovirtsdk4.Error exception containing the details of the failure. It will never return None . Important The Python object returned by this add method is an instance of the relevant type. It is not a service but a container of data. In this particular example, the returned object is an instance of the ovirtsdk4.types.Vm class. If, after creating the virtual machine, you need to perform an operation such as retrieving or starting it, you will first need to find the service that manages it, and call the corresponding service locator: # Add the virtual machine: vm = vms_service.add( ... ) # Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Start the virtual machine vm_service.start() Objects are created asynchronously. When you create a new virtual machine, the add method will return a response before the virtual machine is completely created and ready to be used. It is good practice to poll the status of the object to ensure that it is completely created. For a virtual machine, you should check until its status is DOWN : # Add the virtual machine: vm = vms_service.add( ... 
) # Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Wait until the virtual machine is down, indicating that it is # completely created: while True: time.sleep(5) vm = vm_service.get() if vm.status == types.VmStatus.DOWN: break Using a loop to retrieve the object status, with the get method, ensures that the status attribute is updated. 2.6.4. Using update Methods These service methods update existing objects. They receive an instance of the relevant type describing the update to perform, send the request to update it, and return an instance of the type describing the updated object. This example updates the name of a virtual machine from vm1 to newvm : from ovirtsdk4 import types # Find the virtual machine, and then the service that # manages it: vm = vms_service.list(search='name=vm1')[0] vm_service = vm_service.vm_service(vm.id) # Update the name: updated_vm = vm_service.update( vm=types.Vm( name='newvm' ) ) When performing updates, avoid sending the complete representation of the object. Send only the attributes that you want to update. Do not do this: # Retrieve the complete representation: vm = vm_service.get() # Update the representation, in memory, without sending a request # to the server: vm.name = 'newvm' # Send the update. Do *not* do this. vms_service.update(vm) Sending the complete representation causes two problems: You are sending much more information than the server needs, thus wasting resources. The server will try to update all the attributes of the object, even those that you did not intend to change. This may cause bugs on the server side. The update methods of some services support additional parameters that control how or what to update. For example, you may want to update either the current state of a virtual machine or the state that will be used the time the virtual machine is started. The update method of the service that manages a virtual machine supports a next_run Boolean parameter: # Update the memory of the virtual machine to 1 GiB, # not during the current run, but after boot: vm = vm_service.update( vm=types.Vm( memory=1073741824 ), next_run=True ) If the update cannot be performed for any reason, the SDK will raise an ovirtsdk4.Error exception containing the details of the failure. It will never return None . The Python object returned by this update method is an instance of the relevant type. It is not a service, but a container for data. In this particular example, the returned object will be an instance of the ovirtsdk4.types.Vm class. 2.6.5. Using remove Methods These service methods remove existing objects. They usually do not take parameters, because they are methods of services that manage single objects. Therefore, the service already knows what object to remove. This example removes the virtual machine with identifier 123 : # Find the virtual machine by name: vm = vms_service.list(search='name=123')[0] # Find the service that manages the virtual machine using the ID: vm_service = vms_service.vm_service(vm.id) # Remove the virtual machine: vm_service.remove() The remove methods of some services support additional parameters that control how or what to remove. For example, it is possible to remove a virtual machine while preserving its disks, using the detach_only Boolean parameter: # Remove the virtual machine while preserving the disks: vm_service.remove(detach_only=True) The remove method returns None if the object is removed successfully. It does not return the removed object. 
If the object cannot be removed for any reason, the SDK raises an ovirtsdk4.Error exception containing the details of the failure. 2.6.6. Using Other Action Methods There are other service methods that perform miscellaneous operations, such as stopping and starting a virtual machine: # Start the virtual machine: vm_service.start() Many of these methods include parameters that modify the operation. For example, the method that starts a virtual machine supports a use_cloud_init parameter, if you want to start it using cloud-init : # Start the virtual machine: vm_service.start(cloud_init=True) Most action methods return None when they succeed and raise an ovirtsdk4.Error when they fail. A few action methods return values. For example, the service that manages a storage domain has an is_attached action method that checks whether the storage domain is already attached to a data center and returns a Boolean value: # Check if the storage domain is attached to a data center: sds_service = system_service.storage_domains_service() sd_service = sds_service.storage_domain_service('123') if sd_service.is_attached(): ... Check the reference documentation of the SDK to see the action methods supported by each service, the parameters that they take, and the values that they return. 2.7. Additional Resources For detailed information and examples, see the following resources: V4 REST API Guide Python SDK reference documentation Python SDK examples 2.7.1. Generating documentation for modules You can generate documentation using pydoc for the following modules: ovirtsdk.api ovirtsdk.infrastructure.brokers ovirtsdk.infrastructure.errors The documentation is provided by the ovirt-engine-sdk-python package. Run the following command on the Manager machine to view the latest version of these documents: USD pydoc [MODULE] | [
"vms_service = connection.system_service().vms_service()",
"import ovirtsdk4 as sdk connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', )",
"connection.close()",
"from ovirtsdk4 import types vm = types.Vm( name='vm1', cluster=types.Cluster( name='Default' ), template=types.Template( name='mytemplate' ), memory=1073741824 )",
"vm = types.Vm() vm.name = 'vm1' vm.cluster = types.Cluster(name='Default') vm.template = types.Template(name='mytemplate') vm.memory=1073741824",
"vm = types.Vm( name='vm1', custom_properties=[ types.CustomProperty(...), types.CustomProperty(...), ] )",
"if vm.status == types.VmStatus.DOWN: elif vm.status == types.VmStatus.IMAGE_LOCKED: .",
"print(\"vm.name: %s\" % vm.name) print(\"vm.memory: %s\" % vm.memory) for custom_property in vm.custom_properties:",
"<vm id=\"123\" href=\"/ovirt-engine/api/vms/123\"> <name>vm1</name> <link rel=\"diskattachments\" href=\"/ovirt-engine/api/vms/123/diskattachments/> </vm>",
"Retrieve the virtual machine: vm = vm_service.get() Follow the link to the disk attachments, and then to the disks: attachments = connection.follow_link(vm.disk_attachments) for attachment in attachments: disk = connection.follow_link(attachment.disk) print(\"disk.alias: \" % disk.alias)",
"system_service = connection.system_service()",
"vms_service = system_service.vms_service()",
"vm_service = vms_service.vm_service('123')",
"Find the service that manages the virtual machine: vms_service = system_service.vms_service() vm_service = vms_service.vm_service('123') Retrieve the representation of the virtual machine: vm = vm_service.get()",
"Retrieve the representation of the virtual machine, not the current one, but the one that will be used after the next boot: vm = vm_service.get(next_run=True)",
"Call the service that manages a non-existent virtual machine. This call will succeed. vm_service = vms_service.vm_service('junk') Retrieve the virtual machine. This call will raise an exception. vm = vm_service.get()",
"Find the service that manages the collection of virtual machines: vms_service = system_service.vms_service() List the virtual machines in the collection vms = vms_service.list()",
"vms = vms_service.list(search='name=my*', max=10)",
"from ovirtsdk4 import types Add the virtual machine: vm = vms_service.add( vm=types.Vm( name='vm1', cluster=types.Cluster( name='Default' ), template=types.Template( name='mytemplate' ) ) )",
"Add the virtual machine: vm = vms_service.add( ) Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Start the virtual machine vm_service.start()",
"Add the virtual machine: vm = vms_service.add( ) Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Wait until the virtual machine is down, indicating that it is completely created: while True: time.sleep(5) vm = vm_service.get() if vm.status == types.VmStatus.DOWN: break",
"from ovirtsdk4 import types Find the virtual machine, and then the service that manages it: vm = vms_service.list(search='name=vm1')[0] vm_service = vm_service.vm_service(vm.id) Update the name: updated_vm = vm_service.update( vm=types.Vm( name='newvm' ) )",
"Retrieve the complete representation: vm = vm_service.get() Update the representation, in memory, without sending a request to the server: vm.name = 'newvm' Send the update. Do *not* do this. vms_service.update(vm)",
"Update the memory of the virtual machine to 1 GiB, not during the current run, but after next boot: vm = vm_service.update( vm=types.Vm( memory=1073741824 ), next_run=True )",
"Find the virtual machine by name: vm = vms_service.list(search='name=123')[0] Find the service that manages the virtual machine using the ID: vm_service = vms_service.vm_service(vm.id) Remove the virtual machine: vm_service.remove()",
"Remove the virtual machine while preserving the disks: vm_service.remove(detach_only=True)",
"Start the virtual machine: vm_service.start()",
"Start the virtual machine: vm_service.start(cloud_init=True)",
"Check if the storage domain is attached to a data center: sds_service = system_service.storage_domains_service() sd_service = sds_service.storage_domain_service('123') if sd_service.is_attached():",
"pydoc [MODULE]"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/python_sdk_guide/chap-using_the_software_development_kit |
Chapter 23. Vaults in IdM | Chapter 23. Vaults in IdM Learn more about vaults in Identity Management (IdM). 23.1. Vaults and their benefits A vault is a useful feature for those Identity Management (IdM) users who want to keep all their sensitive data stored securely but conveniently in one place. There are various types of vaults and you should choose which vault to use based on your requirements. A vault is a secure location in (IdM) for storing, retrieving, sharing, and recovering a secret. A secret is security-sensitive data, usually authentication credentials, that only a limited group of people or entities can access. For example, secrets include: Passwords PINs Private SSH keys A vault is comparable to a password manager. Just like a password manager, a vault typically requires a user to generate and remember one primary password to unlock and access any information stored in the vault. However, a user can also decide to have a standard vault. A standard vault does not require the user to enter any password to access the secrets stored in the vault. Note The purpose of vaults in IdM is to store authentication credentials that allow you to authenticate to external, non-IdM-related services. Other important characteristics of the IdM vaults are: Vaults are only accessible to the vault owner and those IdM users that the vault owner selects to be the vault members. In addition, the IdM administrator has access to the vault. If a user does not have sufficient privileges to create a vault, an IdM administrator can create the vault and set the user as its owner. Users and services can access the secrets stored in a vault from any machine enrolled in the IdM domain. One vault can only contain one secret, for example, one file. However, the file itself can contain multiple secrets such as passwords, keytabs or certificates. Note Vault is only available from the IdM command line (CLI), not from the IdM Web UI. 23.2. Vault owners, members, and administrators Identity Management (IdM) distinguishes the following vault user types: Vault owner A vault owner is a user or service with basic management privileges on the vault. For example, a vault owner can modify the properties of the vault or add new vault members. Each vault must have at least one owner. A vault can also have multiple owners. Vault member A vault member is a user or service that can access a vault created by another user or service. Vault administrator Vault administrators have unrestricted access to all vaults and are allowed to perform all vault operations. Note Symmetric and asymmetric vaults are protected with a password or key and apply special access control rules (see Vault types ). The administrator must meet these rules to: Access secrets in symmetric and asymmetric vaults. Change or reset the vault password or key. A vault administrator is any user with the Vault Administrators privilege. In the context of the role-based access control (RBAC) in IdM, a privilege is a group of permissions that you can apply to a role. Vault User The vault user represents the user in whose container the vault is located. The Vault user information is displayed in the output of specific commands, such as ipa vault-show : For details on vault containers and user vaults, see Vault containers . Additional resources See Standard, symmetric and asymmetric vaults for details on vault types. 23.3. 
Standard, symmetric, and asymmetric vaults Based on the level of security and access control, IdM classifies vaults into the following types: Standard vaults Vault owners and vault members can archive and retrieve the secrets without having to use a password or key. Symmetric vaults Secrets in the vault are protected with a symmetric key. Vault owners and members can archive and retrieve the secrets, but they must provide the vault password. Asymmetric vaults Secrets in the vault are protected with an asymmetric key. Users archive the secret using a public key and retrieve it using a private key. Vault members can only archive secrets, while vault owners can do both, archive and retrieve secrets. 23.4. User, service, and shared vaults Based on ownership, IdM classifies vaults into several types. The table below contains information about each type, its owner and use. Table 23.1. IdM vaults based on ownership Type Description Owner Note User vault A private vault for a user A single user Any user can own one or more user vaults if allowed by IdM administrator Service vault A private vault for a service A single service Any service can own one or more user vaults if allowed by IdM administrator Shared vault A vault shared by multiple users and services The vault administrator who created the vault Users and services can own one or more user vaults if allowed by IdM administrator. The vault administrators other than the one that created the vault also have full access to the vault. 23.5. Vault containers A vault container is a collection of vaults. The table below lists the default vault containers that Identity Management (IdM) provides. Table 23.2. Default vault containers in IdM Type Description Purpose User container A private container for a user Stores user vaults for a particular user Service container A private container for a service Stores service vaults for a particular service Shared container A container for multiple users and services Stores vaults that can be shared by multiple users or services IdM creates user and service containers for each user or service automatically when the first private vault for the user or service is created. After the user or service is deleted, IdM removes the container and its contents. 23.6. Basic IdM vault commands You can use the basic commands outlined below to manage Identity Management (IdM) vaults. The table below contains a list of ipa vault-* commands with the explanation of their purpose. Note Before running any ipa vault-* command, install the Key Recovery Authority (KRA) certificate system component on one or more of the servers in your IdM domain. For details, see Installing the Key Recovery Authority in IdM . Table 23.3. Basic IdM vault commands with explanations Command Purpose ipa help vault Displays conceptual information about IdM vaults and sample vault commands. ipa vault-add --help , ipa vault-find --help Adding the --help option to a specific ipa vault-* command displays the options and detailed help available for that command. ipa vault-show user_vault --user idm_user When accessing a vault as a vault member, you must specify the vault owner. If you do not specify the vault owner, IdM informs you that it did not find the vault: ipa vault-show shared_vault --shared When accessing a shared vault, you must specify that the vault you want to access is a shared vault. Otherwise, IdM informs you it did not find the vault: 23.7. 
Installing the Key Recovery Authority in IdM Follow this procedure to enable vaults in Identity Management (IdM) by installing the Key Recovery Authority (KRA) Certificate System (CS) component on a specific IdM server. Prerequisites You are logged in as root on the IdM server. An IdM certificate authority is installed on the IdM server. You have the Directory Manager credentials. Procedure Install the KRA: Important You can install the first KRA of an IdM cluster on a hidden replica. However, installing additional KRAs requires temporarily activating the hidden replica before you install the KRA clone on a non-hidden replica. Then you can hide the originally hidden replica again. Note To make the vault service highly available and resilient, install the KRA on two IdM servers or more. Maintaining multiple KRA servers prevents data loss. Additional resources Demoting or promoting hidden replicas The hidden replica mode | [
"ipa vault-show my_vault Vault name: my_vault Type: standard Owner users: user Vault user: user",
"[admin@server ~]USD ipa vault-show user_vault ipa: ERROR: user_vault: vault not found",
"[admin@server ~]USD ipa vault-show shared_vault ipa: ERROR: shared_vault: vault not found",
"ipa-kra-install"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/vaults-in-idm_using-ansible-to-install-and-manage-identity-management |
Chapter 8. Installing a private cluster on IBM Cloud | Chapter 8. Installing a private cluster on IBM Cloud In OpenShift Container Platform version 4.18, you can install a private cluster into an existing VPC. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud(R) . 8.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Create a DNS zone using IBM Cloud(R) DNS Services and specify it as the base domain of the cluster. For more information, see "Using IBM Cloud(R) DNS Services to configure DNS resolution". Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 8.3. Private clusters in IBM Cloud To create a private cluster on IBM Cloud(R), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the IBM Cloud(R) APIs. The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public ingress A public DNS zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private DNS zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 8.3.1. 
Limitations Private clusters on IBM Cloud(R) are subject only to the limitations associated with the existing VPC that was used for cluster deployment. 8.4. About using a custom VPC In OpenShift Container Platform 4.18, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). Deploying OpenShift Container Platform into an existing VPC can help you avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 8.4.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 8.4.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to the existing VPC. As part of the installation, specify the following in the install-config.yaml file: The name of the existing resource group that contains the VPC and subnets ( networkResourceGroupName ) The name of the existing VPC ( vpcName ) The subnets that were created for control plane machines and compute machines ( controlPlaneSubnets and computeSubnets ) Note Additional installer-provisioned cluster resources are deployed to a separate resource group ( resourceGroupName ). You can specify this resource group before installing the cluster. If undefined, a new resource group is created for the cluster. To ensure that the subnets that you provide are suitable, the installation program confirms the following: All of the subnets that you specify exist. For each availability zone in the region, you specify: One subnet for control plane machines. One subnet for compute machines. The machine CIDR that you specified contains the subnets for the compute machines and control plane machines. Note Subnet IDs are not supported. 8.4.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP port 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 8.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a bastion host on your cloud network or a machine that has access to the to the network through a VPN. For more information about private cluster installation requirements, see "Private clusters". Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 8.8. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 8.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. 
You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Cloud(R) 8.9.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.1. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 8.9.2. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 8.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 gx3d-160x1792x8h100 mx2-8x64 mx2d-4x32 mx3d-4x40 ox2-8x64 ux2d-2x56 vx2d-4x56 8.9.3. Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 12 resourceGroupName: eu-gb-example-cluster-rg 13 networkResourceGroupName: eu-gb-example-existing-network-rg 14 vpcName: eu-gb-example-network-1 15 controlPlaneSubnets: 16 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 17 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: Internal 18 pullSecret: '{"auths": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 1 8 12 19 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 13 The name of an existing resource group. All installer-provisioned cluster resources are deployed to this resource group. If undefined, a new resource group is created for the cluster. 14 Specify the name of the resource group that contains the existing virtual private cloud (VPC). The existing VPC and subnets should be in this resource group. The cluster will be installed to this VPC. 15 Specify the name of an existing VPC. 16 Specify the name of the existing subnets to which to deploy the control plane machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 17 Specify the name of the existing subnets to which to deploy the compute machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 18 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster. The default value is External . 20 Enables or disables FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 8.9.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.10. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 8.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.12. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 8.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 8.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 8.15. steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IC_API_KEY=<api_key>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 12 resourceGroupName: eu-gb-example-cluster-rg 13 networkResourceGroupName: eu-gb-example-existing-network-rg 14 vpcName: eu-gb-example-network-1 15 controlPlaneSubnets: 16 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 17 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: Internal 18 pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_cloud/installing-ibm-cloud-private |
Appendix A. Working with files encrypted using Ansible Vault | Appendix A. Working with files encrypted using Ansible Vault Red Hat recommends encrypting the contents of deployment and management files that contain passwords and other sensitive information. Ansible Vault is one method of encrypting these files. More information about Ansible Vault is available in the Ansible documentation . A.1. Encrypting files You can create an encrypted file by using the ansible-vault create command, or encrypt an existing file by using the ansible-vault encrypt command. When you create an encrypted file or encrypt an existing file, you are prompted to provide a password. This password is used to decrypt the file after encryption. You must provide this password whenever you work directly with information in this file or run a playbook that relies on the file's contents. Creating an encrypted file The ansible-vault create command prompts for a password for the new file, then opens the new file in the default text editor (defined as USDEDITOR in your shell environment) so that you can populate the file before saving it. If you have already created a file and you want to encrypt it, use the ansible-vault encrypt command. Encrypting an existing file A.2. Editing encrypted files You can edit an encrypted file using the ansible-vault edit command and providing the Vault password for that file. Editing an encrypted file The ansible-vault edit command prompts for a password for the file, then opens the file in the default text editor (defined as USDEDITOR in your shell environment) so that you can edit and save the file contents. A.3. Rekeying encrypted files to a new password You can change the password used to decrypt a file by using the ansible-vault rekey command. The ansible-vault rekey command prompts for the current Vault password, and then prompts you to set and confirm a new Vault password. | [
"ansible-vault create variables.yml New Vault password: Confirm New Vault password:",
"ansible-vault encrypt existing-variables.yml New Vault password: Confirm New Vault password: Encryption successful",
"ansible-vault edit variables.yml New Vault password: Confirm New Vault password:",
"ansible-vault rekey variables.yml Vault password: New Vault password: Confirm New Vault password: Rekey successful"
]
| https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/upgrading_red_hat_hyperconverged_infrastructure_for_virtualization/working-with-files-encrypted-using-ansible-vault |
Chapter 3. Management of hosts using the Ceph Orchestrator | Chapter 3. Management of hosts using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to add, list, and remove hosts in an existing Red Hat Ceph Storage cluster. You can also add labels to hosts. Labels are free-form and have no specific meanings. Each host can have multiple labels. For example, apply the mon label to all hosts that have monitor daemons deployed, mgr for all hosts with manager daemons deployed, rgw for Ceph object gateways, and so on. Labeling all the hosts in the storage cluster helps to simplify system management tasks by allowing you to quickly identify the daemons running on each host. In addition, you can use the Ceph Orchestrator or a YAML file to deploy or remove daemons on hosts that have specific host labels. This section covers the following administrative tasks: Adding hosts using the Ceph Orchestrator . Adding multiple hosts using the Ceph Orchestrator . Listing hosts using the Ceph Orchestrator . Adding a label to a host . Removing a label from a host . Removing hosts using the Ceph Orchestrator . Placing hosts in the maintenance mode using the Ceph Orchestrator . Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. The IP addresses of the new hosts should be updated in /etc/hosts file. 3.1. Adding hosts using the Ceph Orchestrator You can use the Ceph Orchestrator with Cephadm in the backend to add hosts to an existing Red Hat Ceph Storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. Register the nodes to the CDN and attach subscriptions. Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster. Procedure From the Ceph administration node, log into the Cephadm shell: Example Extract the cluster's public SSH keys to a folder: Syntax Example Copy Ceph cluster's public SSH keys to the root user's authorized_keys file on the new host: Syntax Example From the Ansible administration node, add the new host to the Ansible inventory file. The default location for the file is /usr/share/cephadm-ansible/hosts . The following example shows the structure of a typical inventory file: Example Note If you have previously added the new host to the Ansible inventory file and run the preflight playbook on the host, skip to step 6. Run the preflight playbook with the --limit option: Syntax Example The preflight playbook installs podman , lvm2 , chronyd , and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory. From the Ceph administration node, log into the Cephadm shell: Example Use the cephadm orchestrator to add hosts to the storage cluster: Syntax The --label option is optional and this adds the labels when adding the hosts. You can add multiple labels to the host. Example Verification List the hosts: Example Additional Resources See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . For more information about the cephadm-preflight playbook, see Running the preflight playbook section in the Red Hat Ceph Storage Installation Guide . See the Registering Red Hat Ceph Storage nodes to the CDN and attaching subscriptions section in the Red Hat Ceph Storage Installation Guide . See the Creating an Ansible user with sudo access section in the Red Hat Ceph Storage Installation Guide . 3.2. 
Adding multiple hosts using the Ceph Orchestrator You can use the Ceph Orchestrator to add multiple hosts to a Red Hat Ceph Storage cluster at the same time using the service specification in YAML file format. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Create the hosts.yaml file: Example Edit the hosts.yaml file to include the following details: Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Deploy the hosts using service specification: Syntax Example Verification List the hosts: Example Additional Resources See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . 3.3. Listing hosts using the Ceph Orchestrator You can list hosts of a Ceph cluster with Ceph Orchestrators. Note The STATUS of the hosts is blank, in the output of the ceph orch host ls command. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the storage cluster. Procedure Log into the Cephadm shell: Example List the hosts of the cluster: Example You will see that the STATUS of the hosts is blank which is expected. 3.4. Adding a label to a host Use the Ceph Orchestrator to add a label to a host. Labels can be used to specify placement of daemons. A few examples of labels are mgr , mon , and osd based on the service deployed on the hosts. Each host can have multiple labels. You can also add the following host labels that have special meaning to cephadm and they begin with _ : _no_schedule : This label prevents cephadm from scheduling or deploying daemons on the host. If it is added to an existing host that already contains Ceph daemons, it causes cephadm to move those daemons elsewhere, except OSDs which are not removed automatically. When a host is added with the _no_schedule label, no daemons are deployed on it. When the daemons are drained before the host is removed, the _no_schedule label is set on that host. _no_autotune_memory : This label does not autotune memory on the host. It prevents the daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled for one or more daemons on that host. _admin : By default, the _admin label is applied to the bootstrapped host in the storage cluster and the client.admin key is set to be distributed to that host with the ceph orch client-keyring {ls|set|rm} function. Adding this label to additional hosts normally causes cephadm to deploy configuration and keyring files in the /etc/ceph directory. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Hosts are added to the storage cluster. Procedure Log in to the Cephadm shell: Example Add a label to a host: Syntax Example Verification List the hosts: Example 3.5. Removing a label from a host You can use the Ceph orchestrator to remove a label from a host. Prerequisites A storage cluster that has been installed and bootstrapped. Root-level access to all nodes in the storage cluster. Procedure Launch the cephadm shell: Remove the label. Syntax Example Verification List the hosts: Example 3.6. Removing hosts using the Ceph Orchestrator You can remove hosts of a Ceph cluster with the Ceph Orchestrators. All the daemons are removed with the drain option which adds the _no_schedule label to ensure that you cannot deploy any daemons or a cluster till the operation is complete. 
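The procedure in the next section walks through these removal steps one at a time. As a rough, non-authoritative sketch, the whole drain-and-remove flow can also be scripted as shown below; the host name host02 and the 30-second polling interval are illustrative assumptions only, and the output format of ceph orch osd rm status can vary between releases.

#!/bin/bash
# Sketch: drain a host, wait for its OSDs to be removed, then remove the host.
HOST=host02

# Applies the _no_schedule label and schedules removal of all daemons on the host.
ceph orch host drain "$HOST"

# Poll until the OSD removal queue no longer mentions this host.
while ceph orch osd rm status | grep -q "$HOST"; do
    echo "Waiting for OSDs on $HOST to finish draining..."
    sleep 30
done

# Remove the host only once no daemons are reported on it.
if [ "$(ceph orch ps "$HOST" --format json)" = "[]" ]; then
    ceph orch host rm "$HOST"
else
    echo "Daemons are still present on $HOST; not removing it yet."
fi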
Important If you are removing the bootstrap host, be sure to copy the admin keyring and the configuration file to another host in the storage cluster before you remove the host. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the storage cluster. All the services are deployed. Cephadm is deployed on the nodes where the services have to be removed. Procedure Log into the Cephadm shell: Example Fetch the host details: Example Drain all the daemons from the host: Syntax Example The _no_schedule label is automatically applied to the host which blocks deployment. Check the status of OSD removal: Example When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster. Check if all the daemons are removed from the storage cluster: Syntax Example Remove the host: Syntax Example Additional Resources See the Adding hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. 3.7. Placing hosts in the maintenance mode using the Ceph Orchestrator You can use the Ceph Orchestrator to place the hosts in and out of the maintenance mode. The ceph orch host maintenance enter command stops the systemd target which causes all the Ceph daemons to stop on the host. Similarly, the ceph orch host maintenance exit command restarts the systemd target and the Ceph daemons restart on their own. The orchestrator adopts the following workflow when the host is placed in maintenance: Confirms the removal of hosts does not impact data availability by running the orch host ok-to-stop command. If the host has Ceph OSD daemons, it applies noout to the host subtree to prevent data migration from triggering during the planned maintenance slot. Stops the Ceph target, thereby, stopping all the daemons. Disables the ceph target on the host, to prevent a reboot from automatically starting Ceph services. Exiting maintenance reverses the above sequence. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts added to the cluster. Procedure Log into the Cephadm shell: Example You can either place the host in maintenance mode or place it out of the maintenance mode: Place the host in maintenance mode: Syntax Example The --force flag allows the user to bypass warnings, but not alerts. Place the host out of the maintenance mode: Syntax Example Verification List the hosts: Example | [
"cephadm shell",
"ceph cephadm get-pub-key > ~/ PATH",
"ceph cephadm get-pub-key > ~/ceph.pub",
"ssh-copy-id -f -i ~/ PATH root@ HOST_NAME_2",
"ssh-copy-id -f -i ~/ceph.pub root@host02",
"host01 host02 host03 [admin] host00",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02",
"cephadm shell",
"ceph orch host add HOST_NAME IP_ADDRESS_OF_HOST [--label= LABEL_NAME_1 , LABEL_NAME_2 ]",
"ceph orch host add host02 10.10.128.70 --labels=mon,mgr",
"ceph orch host ls",
"touch hosts.yaml",
"service_type: host addr: host01 hostname: host01 labels: - mon - osd - mgr --- service_type: host addr: host02 hostname: host02 labels: - mon - osd - mgr --- service_type: host addr: host03 hostname: host03 labels: - mon - osd",
"cephadm shell --mount hosts.yaml:/var/lib/ceph/hosts.yaml",
"cd /var/lib/ceph/",
"ceph orch apply -i FILE_NAME .yaml",
"ceph orch apply -i hosts.yaml",
"ceph orch host ls",
"cephadm shell",
"ceph orch host ls",
"cephadm shell",
"ceph orch host label add HOSTNAME LABEL",
"ceph orch host label add host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host label rm HOSTNAME LABEL",
"ceph orch host label rm host02 mon",
"ceph orch host ls",
"cephadm shell",
"ceph orch host ls",
"ceph orch host drain HOSTNAME",
"ceph orch host drain host02",
"ceph orch osd rm status",
"ceph orch ps HOSTNAME",
"ceph orch ps host02",
"ceph orch host rm HOSTNAME",
"ceph orch host rm host02",
"cephadm shell",
"ceph orch host maintenance enter HOST_NAME [--force]",
"ceph orch host maintenance enter host02 --force",
"ceph orch host maintenance exit HOST_NAME",
"ceph orch host maintenance exit host02",
"ceph orch host ls"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/operations_guide/management-of-hosts-using-the-ceph-orchestrator |
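For the maintenance-mode workflow described in section 3.7, a minimal shell sketch such as the following can help confirm that a host actually entered and left maintenance. The host name host02 is an assumption, and the exact STATUS text printed by ceph orch host ls can differ between releases.

# Sketch: place host02 into maintenance, verify, then bring it back.
HOST=host02

ceph orch host maintenance enter "$HOST" --force
ceph orch host ls | grep "$HOST"        # expect a maintenance status for this host

# ... perform the planned maintenance work (updates, reboot, and so on) ...

ceph orch host maintenance exit "$HOST"
ceph orch host ls | grep "$HOST"        # the status column should return to normal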
Chapter 1. Introduction | Chapter 1. Introduction 1.1. Virtualized and Non-Virtualized Environments A virtualized environment presents opportunities for both the discovery of new attack vectors and the refinement of existing exploits that may not previously have presented value to an attacker. Therefore, it is important to take steps to ensure the security of both the physical hosts and the guests running on them when creating and maintaining virtual machines. Non-Virtualized Environment In a non-virtualized environment, hosts are separated from each other physically and each host has a self-contained environment, which consists of services such as a web server, or a DNS server. These services communicate directly to their own user space, host kernel and physical host, offering their services directly to the network. Figure 1.1. Non-Virtualized Environment Virtualized Environment In a virtualized environment, several operating systems can be housed (as guest virtual machines) within a single host kernel and physical host. Figure 1.2. Virtualized Environment When services are not virtualized, machines are physically separated. Any exploit is, therefore, usually contained to the affected machine, with the exception of network attacks. When services are grouped together in a virtualized environment, extra vulnerabilities emerge in the system. If a security flaw exists in the hypervisor that can be exploited by a guest instance, this guest may be able to attack the host, as well as other guests running on that host. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_security_guide/chap-virtualization_security_guide-introduction |
4.3. Configuring Redundant Ring Protocol (RRP) | 4.3. Configuring Redundant Ring Protocol (RRP) Note Red Hat supports the configuration of Redundant Ring Protocol (RRP) in clusters subject to the conditions described in the "Redundant Ring Protocol (RRP)" section of Support Policies for RHEL High Availability Clusters - Cluster Interconnect Network Interfaces . When you create a cluster with the pcs cluster setup command, you can configure a cluster with Redundant Ring Protocol by specifying both interfaces for each node. When using the default udpu transport, specify each cluster node as its ring 0 address, followed by a comma, then its ring 1 address. For example, the following command configures a cluster named my_rrp_cluster with two nodes, node A and node B. Node A has two interfaces, nodeA-0 and nodeA-1 . Node B has two interfaces, nodeB-0 and nodeB-1 . To configure these nodes as a cluster using RRP, execute the following command. For information on configuring RRP in a cluster that uses udp transport, see the help screen for the pcs cluster setup command. | [
"pcs cluster setup --name my_rrp_cluster nodeA-0,nodeA-1 nodeB-0,nodeB-1"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-configrrp-haar |
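Although not shown in this section, it can be useful to confirm after cluster creation that both rings are actually active. One way to do this, assuming corosync is already running on the node, is the corosync-cfgtool utility; the addresses in the sample output below are placeholders, and the exact wording of the status lines can vary.

# Show the status of both RRP rings on the local node.
corosync-cfgtool -s

# Illustrative output for a healthy two-ring configuration:
#   Printing ring status.
#   RING ID 0
#           id      = 192.168.1.10
#           status  = ring 0 active with no faults
#   RING ID 1
#           id      = 10.0.0.10
#           status  = ring 1 active with no faults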
Release Notes for Streams for Apache Kafka 2.9 on OpenShift | Release Notes for Streams for Apache Kafka 2.9 on OpenShift Red Hat Streams for Apache Kafka 2.9 Highlights of what's new and what's changed with this release of Streams for Apache Kafka on OpenShift Container Platform | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_openshift/index |
Chapter 4. Control Group Application Examples | Chapter 4. Control Group Application Examples This chapter provides application examples that take advantage of the cgroup functionality. 4.1. Prioritizing Database I/O Running each instance of a database server inside its own dedicated virtual guest allows you to allocate resources per database based on their priority. Consider the following example: a system is running two database servers inside two KVM guests. One of the databases is a high priority database and the other one a low priority database. When both database servers are run simultaneously, the I/O throughput is decreased to accommodate requests from both databases equally; Figure 4.1, "I/O throughput without resource allocation" indicates this scenario - once the low priority database is started (around time 45), I/O throughput is the same for both database servers. Figure 4.1. I/O throughput without resource allocation To prioritize the high priority database server, it can be assigned to a cgroup with a high number of reserved I/O operations, whereas the low priority database server can be assigned to a cgroup with a low number of reserved I/O operations. To achieve this, follow the steps in Procedure 4.1, "I/O Throughput Prioritization" , all of which are performed on the host system. Procedure 4.1. I/O Throughput Prioritization Make sure resource accounting is on for both services: Set a ratio of 10:1 for the high and low priority services. Processes running in those service units will use only the resources made available to them. Figure 4.2, "I/O throughput with resource allocation" illustrates the outcome of limiting the low priority database and prioritizing the high priority database. As soon as the database servers are moved to their appropriate cgroups (around time 75), I/O throughput is divided between both servers with the ratio of 10:1. Figure 4.2. I/O throughput with resource allocation Alternatively, block device I/O throttling can be used for the low priority database to limit its number of read and write operations; an illustrative sketch of this approach follows the command listing below. For more information, see the description of the blkio controller in Controller-Specific Kernel Documentation . | [
"~]# systemctl set-property db1.service BlockIOAccounting = true ~]# systemctl set-property db2.service BlockIOAccounting = true",
"~]# systemctl set-property db1.service BlockIOWeight = 1000 ~]# systemctl set-property db2.service BlockIOWeight = 100"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/chap-control_group_application_examples |
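As an illustration of the throttling alternative mentioned at the end of the procedure above, the BlockIOReadBandwidth and BlockIOWriteBandwidth unit properties can cap the low priority service directly. The device path /dev/vdb and the 5M limit are assumptions chosen for illustration, not values from the original example; substitute the block device that actually backs the low priority guest.

# Cap read and write bandwidth for the low priority database service.
~]# systemctl set-property db2.service BlockIOReadBandwidth="/dev/vdb 5M"
~]# systemctl set-property db2.service BlockIOWriteBandwidth="/dev/vdb 5M"

# Verify the applied resource-control settings for the unit.
~]# systemctl show db2.service | grep BlockIO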
Chapter 3. Upgrading from a Streams version before 1.7 | Chapter 3. Upgrading from a Streams version before 1.7 The v1beta2 API version for all custom resources was introduced with Streams for Apache Kafka 1.7. For Streams for Apache Kafka 1.8, v1alpha1 and v1beta1 API versions were removed from all Streams for Apache Kafka custom resources apart from KafkaTopic and KafkaUser . Upgrade of the custom resources to v1beta2 prepares Streams for Apache Kafka for a move to Kubernetes CRD v1 , which is required for Kubernetes 1.22. If you are upgrading from a Streams for Apache Kafka version prior to version 1.7: Upgrade to Streams for Apache Kafka 1.7 Convert the custom resources to v1beta2 Upgrade to Streams for Apache Kafka 1.8 Important You must upgrade your custom resources to use API version v1beta2 before upgrading to Streams for Apache Kafka version 2.9. 3.1. Upgrading custom resources to v1beta2 To support the upgrade of custom resources to v1beta2 , Streams for Apache Kafka provides an API conversion tool , which you can download from the Streams for Apache Kafka 1.8 software downloads page . You perform the custom resources upgrades in two steps. Step one: Convert the format of custom resources Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways: Converting the YAML files that describe the configuration for Streams for Apache Kafka custom resources Converting Streams for Apache Kafka custom resources directly in the cluster Alternatively, you can manually convert each custom resource into a format applicable to v1beta2 . Instructions for manually converting custom resources are included in the documentation. Step two: Upgrade CRDs to v1beta2 , using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually. For more information, see Upgrading from a Streams for Apache Kafka version earlier than 1.7 . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_openshift/con-api-version-updates-str |
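Before and after running the conversion, it can help to confirm which API version your resources are served at and which version the CRD stores. The queries below are a minimal sketch using standard kubectl behavior rather than commands taken from the Streams for Apache Kafka documentation; the CRD name kafkas.kafka.strimzi.io is the usual Strimzi one and should be adjusted if yours differs.

# List Kafka custom resources and the API version they are currently served at.
kubectl get kafka -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{" "}{.apiVersion}{"\n"}{end}'

# Show which version is marked as the storage version on the Kafka CRD.
# After the crd-upgrade step, this should report v1beta2.
kubectl get crd kafkas.kafka.strimzi.io -o jsonpath='{.spec.versions[?(@.storage==true)].name}{"\n"}'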
Chapter 11. Configuration of SNMP traps | Chapter 11. Configuration of SNMP traps As a storage administrator, you can deploy and configure the simple network management protocol (SNMP) gateway in a Red Hat Ceph Storage cluster to receive alerts from the Prometheus Alertmanager and route them as SNMP traps to the cluster. 11.1. Simple network management protocol Simple network management protocol (SNMP) is one of the most widely used open protocols, to monitor distributed systems and devices across a variety of hardware and software platforms. Ceph's SNMP integration focuses on forwarding alerts from its Prometheus Alertmanager cluster to a gateway daemon. The gateway daemon transforms the alert into an SNMP Notification and sends it on to a designated SNMP management platform. The gateway daemon is from the snmp_notifier_project , which provides SNMP V2c and V3 support with authentication and encryption. The Red Hat Ceph Storage SNMP gateway service deploys one instance of the gateway by default. You can increase this by providing placement information. However, if you enable multiple SNMP gateway daemons, your SNMP management platform receives multiple notifications for the same event. The SNMP traps are alert messages and the Prometheus Alertmanager sends these alerts to the SNMP notifier which then looks for object identifier (OID) in the given alerts' labels. Each SNMP trap has a unique ID which allows it to send additional traps with updated status to a given SNMP poller. SNMP hooks into the Ceph health checks so that every health warning generates a specific SNMP trap. In order to work correctly and transfer information on device status to the user to monitor, SNMP relies on several components. There are four main components that makeup SNMP: SNMP Manager - The SNMP manager, also called a management station, is a computer that runs network monitoring platforms. A platform that has the job of polling SNMP-enabled devices and retrieving data from them. An SNMP Manager queries agents, receives responses from agents and acknowledges asynchronous events from agents. SNMP Agent - An SNMP agent is a program that runs on a system to be managed and contains the MIB database for the system. These collect data like bandwidth and disk space, aggregates it, and sends it to the management information base (MIB). Management information base (MIB) - These are components contained within the SNMP agents. The SNMP manager uses this as a database and asks the agent for access to particular information. This information is needed for the network management systems (NMS). The NMS polls the agent to take information from these files and then proceeds to translate it into graphs and displays that can be viewed by the user. MIBs contain statistical and control values that are determined by the network device. SNMP Devices The following versions of SNMP are compatible and supported for gateway implementation: V2c - Uses a community string without any authentication and is vulnerable to outside attacks. V3 authNoPriv - Uses the username and password authentication without encryption. V3 authPriv - Uses the username and password authentication with encryption to the SNMP management platform. Important When using SNMP traps, ensure that you have the correct security configuration for your version number to minimize the vulnerabilities that are inherent to SNMP and keep your network protected from unauthorized users. 11.2. 
Configuring snmptrapd It is important to configure the simple network management protocol (SNMP) target before deploying the snmp-gateway because the snmptrapd daemon contains the auth settings that you need to specify when creating the snmp-gateway service. The SNMP gateway feature provides a means of exposing the alerts that are generated in the Prometheus stack to an SNMP management platform. You can configure the SNMP traps to the destination based on the snmptrapd tool. This tool allows you to establish one or more SNMP trap listeners. The following parameters are important for configuration: The engine-id is a unique identifier for the device, in hex, and required for SNMPV3 gateway. Red Hat recommends using `8000C53F_CLUSTER_FSID_WITHOUT_DASHES_`for this parameter. The snmp-community , which is the SNMP_COMMUNITY_FOR_SNMPV2 parameter, is public for SNMPV2c gateway. The auth-protocol which is the AUTH_PROTOCOL , is mandatory for SNMPV3 gateway and is SHA by default. The privacy-protocol , which is the PRIVACY_PROTOCOL , is mandatory for SNMPV3 gateway. The PRIVACY_PASSWORD is mandatory for SNMPV3 gateway with encryption. The SNMP_V3_AUTH_USER_NAME is the user name and is mandatory for SNMPV3 gateway. The SNMP_V3_AUTH_PASSWORD is the password and is mandatory for SNMPV3 gateway. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. Install firewalld on Red Hat Enterprise Linux system. Procedure On the SNMP management host, install the SNMP packages: Example Open the port 162 for SNMP to receive alerts: Example Implement the management information base (MIB) to make sense of the SNMP notification and enhance SNMP support on the destination host. Copy the raw file from the main repository: https://github.com/ceph/ceph/blob/master/monitoring/snmp/CEPH-MIB.txt Example Create the snmptrapd directory. Example Create the configuration files in snmptrapd directory for each protocol based on the SNMP version: Syntax For SNMPV2c, create the snmptrapd_public.conf file as follows: Example The public setting here must match the snmp_community setting used when deploying the snmp-gateway service. For SNMPV3 with authentication only, create the snmptrapd_auth.conf file as follows: Example The 0x8000C53Ff64f341c655d11eb8778fa163e914bcc string is the engine_id , and myuser and mypassword are the credentials. The password security is defined by the SHA algorithm. This corresponds to the settings for deploying the snmp-gateway daemon. Example For SNMPV3 with authentication and encryption, create the snmptrapd_authpriv.conf file as follows: Example The 0x8000C53Ff64f341c655d11eb8778fa163e914bcc string is the engine_id , and myuser and mypassword are the credentials. The password security is defined by the SHA algorithm and DES is the type of privacy encryption. This corresponds to the settings for deploying the snmp-gateway daemon. Example Run the daemon on the SNMP management host: Syntax Example If any alert is triggered on the storage cluster, you can monitor the output on the SNMP management host. Verify the SNMP traps and also the traps decoded by MIB. Example In the above example, an alert is generated after the Prometheus module is disabled. Additional Resources See the Deploying the SNMP gateway section in the Red Hat Ceph Storage Operations Guide . 11.3. Deploying the SNMP gateway You can deploy the simple network management protocol (SNMP) gateway using either SNMPV2c or SNMPV3. There are two methods to deploy the SNMP gateway: By creating a credentials file. 
By creating one service configuration yaml file with all the details. You can use the following parameters to deploy the SNMP gateway based on the versions: The service_type is the snmp-gateway . The service_name is any user-defined string. The count is the number of SNMP gateways to be deployed in a storage cluster. The snmp_destination parameter must be of the format hostname:port. The engine-id is a unique identifier for the device, in hex, and required for SNMPV3 gateway. Red Hat recommends to use `8000C53F_CLUSTER_FSID_WITHOUT_DASHES_`for this parameter. The snmp_community parameter is public for SNMPV2c gateway. The auth-protocol is mandatory for SNMPV3 gateway and is SHA by default. The privacy-protocol is mandatory for SNMPV3 gateway with authentication and encryption. The port is 9464 by default. You must provide a -i FILENAME to pass the secrets and passwords to the orchestrator. Once the SNMP gateway service is deployed or updated, the Prometheus Alertmanager configuration is automatically updated to forward any alert that has an objectidentifier to the SNMP gateway daemon for further processing. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. Configuring snmptrapd on the destination host, which is the SNMP management host. Procedure Log into the Cephadm shell: Example Create a label for the host on which SNMP gateway needs to be deployed: Syntax Example Create a credentials file or a service configuration file based on the SNMP version: For SNMPV2c, create the file as follows: Example OR Example For SNMPV3 with authentication only, create the file as follows: Example OR Example For SNMPV3 with authentication and encryption, create the file as follows: Example OR Example Run the ceph orch command: Syntax OR Syntax For SNMPV2c, with the snmp_creds file, run the ceph orch command with the snmp-version as V2c : Example For SNMPV3 with authentication only, with the snmp_creds file, run the ceph orch command with the snmp-version as V3 and engine-id : Example For SNMPV3 with authentication and encryption, with the snmp_creds file, run the ceph orch command with the snmp-version as V3 , privacy-protocol , and engine-id : Example OR For all the SNMP versions, with the snmp-gateway file, run the following command: Example Additional Resources See the Configuring `snmptrapd` section in the Red Hat Ceph Storage Operations Guide . | [
"dnf install -y net-snmp-utils net-snmp",
"firewall-cmd --zone=public --add-port=162/udp firewall-cmd --zone=public --add-port=162/udp --permanent",
"curl -o CEPH_MIB.txt -L https://raw.githubusercontent.com/ceph/ceph/master/monitoring/snmp/CEPH-MIB.txt scp CEPH_MIB.txt root@host02:/usr/share/snmp/mibs",
"mkdir /root/snmptrapd/",
"format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n createuser -e 0x_ENGINE_ID_ SNMPV3_AUTH_USER_NAME AUTH_PROTOCOL SNMP_V3_AUTH_PASSWORD PRIVACY_PROTOCOL PRIVACY_PASSWORD authuser log,execute SNMP_V3_AUTH_USER_NAME authCommunity log,execute,net SNMP_COMMUNITY_FOR_SNMPV2",
"format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n authCommunity log,execute,net public",
"format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n createuser -e 0x8000C53Ff64f341c655d11eb8778fa163e914bcc myuser SHA mypassword authuser log,execute myuser",
"snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword",
"format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n createuser -e 0x8000C53Ff64f341c655d11eb8778fa163e914bcc myuser SHA mypassword DES mysecret authuser log,execute myuser",
"snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword snmp_v3_priv_password: mysecret",
"/usr/sbin/snmptrapd -M /usr/share/snmp/mibs -m CEPH-MIB.txt -f -C -c /root/snmptrapd/ CONFIGURATION_FILE -Of -Lo :162",
"/usr/sbin/snmptrapd -M /usr/share/snmp/mibs -m CEPH-MIB.txt -f -C -c /root/snmptrapd/snmptrapd_auth.conf -Of -Lo :162",
"NET-SNMP version 5.8 Agent Address: 0.0.0.0 Agent Hostname: <UNKNOWN> Date: 15 - 5 - 12 - 8 - 10 - 4461391 Enterprise OID: . Trap Type: Cold Start Trap Sub-Type: 0 Community/Infosec Context: TRAP2, SNMP v3, user myuser, context Uptime: 0 Description: Cold Start PDU Attribute/Value Pair Array: .iso.org.dod.internet.mgmt.mib-2.1.3.0 = Timeticks: (292276100) 3 days, 19:52:41.00 .iso.org.dod.internet.snmpV2.snmpModules.1.1.4.1.0 = OID: .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive.1 = STRING: \"1.3.6.1.4.1.50495.1.2.1.6.2[alertname=CephMgrPrometheusModuleInactive]\" .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive.2 = STRING: \"critical\" .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive.3 = STRING: \"Status: critical - Alert: CephMgrPrometheusModuleInactive Summary: Ceph's mgr/prometheus module is not available Description: The mgr/prometheus module at 10.70.39.243:9283 is unreachable. This could mean that the module has been disabled or the mgr itself is down. Without the mgr/prometheus module metrics and alerts will no longer function. Open a shell to ceph and use 'ceph -s' to determine whether the mgr is active. If the mgr is not active, restart it, otherwise you can check the mgr/prometheus module is loaded with 'ceph mgr module ls' and if it's not listed as enabled, enable it with 'ceph mgr module enable prometheus'\"",
"cephadm shell",
"ceph orch host label add HOSTNAME snmp-gateway",
"ceph orch host label add host02 snmp-gateway",
"cat snmp_creds.yml snmp_community: public",
"cat snmp-gateway.yml service_type: snmp-gateway service_name: snmp-gateway placement: count: 1 spec: credentials: snmp_community: public port: 9464 snmp_destination: 192.168.122.73:162 snmp_version: V2c",
"cat snmp_creds.yml snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword",
"cat snmp-gateway.yml service_type: snmp-gateway service_name: snmp-gateway placement: count: 1 spec: credentials: snmp_v3_auth_password: mypassword snmp_v3_auth_username: myuser engine_id: 8000C53Ff64f341c655d11eb8778fa163e914bcc port: 9464 snmp_destination: 192.168.122.1:162 snmp_version: V3",
"cat snmp_creds.yml snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword snmp_v3_priv_password: mysecret",
"cat snmp-gateway.yml service_type: snmp-gateway service_name: snmp-gateway placement: count: 1 spec: credentials: snmp_v3_auth_password: mypassword snmp_v3_auth_username: myuser snmp_v3_priv_password: mysecret engine_id: 8000C53Ff64f341c655d11eb8778fa163e914bcc port: 9464 snmp_destination: 192.168.122.1:162 snmp_version: V3",
"ceph orch apply snmp-gateway --snmp_version= V2c_OR_V3 --destination= SNMP_DESTINATION [--port= PORT_NUMBER ] [--engine-id=8000C53F_CLUSTER_FSID_WITHOUT_DASHES_] [--auth-protocol= MDS_OR_SHA ] [--privacy_protocol= DES_OR_AES ] -i FILENAME",
"ceph orch apply -i FILENAME .yml",
"ceph orch apply snmp-gateway --snmp-version=V2c --destination=192.168.122.73:162 --port=9464 -i snmp_creds.yml",
"ceph orch apply snmp-gateway --snmp-version=V3 --engine-id=8000C53Ff64f341c655d11eb8778fa163e914bcc--destination=192.168.122.73:162 -i snmp_creds.yml",
"ceph orch apply snmp-gateway --snmp-version=V3 --engine-id=8000C53Ff64f341c655d11eb8778fa163e914bcc--destination=192.168.122.73:162 --privacy-protocol=AES -i snmp_creds.yml",
"ceph orch apply -i snmp-gateway.yml"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/operations_guide/configuration-of-snmp-traps |
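Before wiring the gateway to a management platform, it can be worth confirming that the snmptrapd listener configured in section 11.2 is reachable. The following is only a connectivity sketch using the generic SNMPv2c coldStart trap and the snmptrap utility from the net-snmp-utils package, not a Ceph-specific notification; the destination host02 and the public community string are assumptions matching the earlier SNMPV2c example.

# Send a generic SNMPv2c coldStart test trap to the snmptrapd listener on port 162.
snmptrap -v 2c -c public host02:162 '' 1.3.6.1.6.3.1.1.5.1

# The snmptrapd foreground session on the management host should log the received trap.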
Chapter 6. Camel K trait configuration reference | Chapter 6. Camel K trait configuration reference This chapter provides reference information about advanced features and core capabilities that you can configure on the command line at runtime using traits . Camel K provides feature traits to configure specific features and technologies. Camel K provides platform traits to configure internal Camel K core capabilities. Important The Red Hat Integration - Camel K 1.6 includes the OpenShift and Knative profiles. The Kubernetes profile has community-only support. It also includes Java, and YAML DSL support for integrations. Other languages such as XML, Groovy, JavaScript, and Kotlin have community-only support. This chapter includes the following sections: Camel K feature traits Section 6.2.2, "Knative Trait" - Technology Preview Section 6.2.3, "Knative Service Trait" - Technology Preview Section 6.2.9, "Prometheus Trait" Section 6.2.10, "Pdb Trait" Section 6.2.11, "Pull Secret Trait" Section 6.2.12, "Route Trait" Section 6.2.13, "Service Trait" Camel K core platform traits Section 6.3.1, "Builder Trait" Section 6.3.3, "Camel Trait" Section 6.3.2, "Container Trait" Section 6.3.4, "Dependencies Trait" Section 6.3.5, "Deployer Trait" Section 6.3.6, "Deployment Trait" Section 6.3.7, "Environment Trait" Section 6.3.8, "Error Handler Trait" Section 6.3.9, "Jvm Trait" Section 6.3.10, "Kamelets Trait" Section 6.3.11, "NodeAffinity Trait" Section 6.3.12, "Openapi Trait" - Technology Preview Section 6.3.13, "Owner Trait" Section 6.3.14, "Platform Trait" Section 6.3.15, "Quarkus Trait" 6.1. Camel K trait and profile configuration This section explains the important Camel K concepts of traits and profiles , which are used to configure advanced Camel K features at runtime. Camel K traits Camel K traits are advanced features and core capabilities that you can configure on the command line to customize Camel K integrations. For example, this includes feature traits that configure interactions with technologies such as 3scale API Management, Quarkus, Knative, and Prometheus. Camel K also provides internal platform traits that configure important core platform capabilities such as Camel support, containers, dependency resolution, and JVM support. Camel K profiles Camel K profiles define the target cloud platforms on which Camel K integrations run. Supported profiles are OpenShift and Knative profiles. Note When you run an integration on OpenShift, Camel K uses the Knative profile when OpenShift Serverless is installed on the cluster. Camel K uses the OpenShift profile when OpenShift Serverless is not installed. You can also specify the profile at runtime using the kamel run --profile option. Camel K provides useful defaults for all traits, taking into account the target profile on which the integration runs. However, advanced users can configure Camel K traits for custom behavior. Some traits only apply to specific profiles such as OpenShift or Knative . For more details, see the available profiles in each trait description. Camel K trait configuration Each Camel trait has a unique ID that you can use to configure the trait on the command line. For example, the following command disables creating an OpenShift Service for an integration: kamel run --trait service.enabled=false my-integration.yaml You can also use the -t option to specify traits. Camel K trait properties You can use the enabled property to enable or disable each trait. 
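For instance, the enabled property can be set with either the long --trait flag or the -t shorthand, and several traits can be combined on one command line; the integration file name and trait values below are placeholders chosen for illustration.

# Equivalent ways of disabling the Service trait for an integration.
kamel run --trait service.enabled=false my-integration.yaml
kamel run -t service.enabled=false my-integration.yaml

# Several traits can be configured in a single run.
kamel run -t service.enabled=false -t prometheus.enabled=true my-integration.yaml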
All traits have their own internal logic to determine if they need to be enabled when the user does not activate them explicitly. Warning Disabling a platform trait may compromise the platform functionality. Some traits have an auto property, which you can use to enable or disable automatic configuration of the trait based on the environment. For example, this includes traits such as 3scale, Cron, and Knative. This automatic configuration can enable or disable the trait when the enabled property is not explicitly set, and can change the trait configuration. Most traits have additional properties that you can configure on the command line. For more details, see the descriptions for each trait in the sections that follow. 6.2. Camel K feature traits 6.2.1. Health Trait The health trait is responsible for configuring the health probes on the integration container. It is disabled by default. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.1.1. Configuration Trait properties are specified when running any integration by using the following CLI. USD kamel run --trait health.[key]=[value] --trait health.[key2]=[value2] integration.java The following configuration options are available. Property Type Description health.enabled bool Can be used to enable or disable a trait. All traits share this common property. health.liveness-probe-enabled bool Configures the liveness probe for the integration container (default false ). health.liveness-scheme string Scheme to use when connecting to the liveness probe (default HTTP ). health.liveness-initial-delay int32 Number of seconds after the container has started before the liveness probe is initiated. health.liveness-timeout int32 Number of seconds after which the liveness probe times out. health.liveness-period int32 How often to perform the liveness probe. health.liveness-success-threshold int32 Minimum consecutive successes for the liveness probe to be considered successful after having failed. health.liveness-failure-threshold int32 Minimum consecutive failures for the liveness probe to be considered failed after having succeeded. health.readiness-probe-enabled bool Configures the readiness probe for the integration container (default true ). health.readiness-scheme string Scheme to use when connecting to the readiness probe (default HTTP ). health.readiness-initial-delay int32 Number of seconds after the container has started before the readiness probe is initiated. health.readiness-timeout int32 Number of seconds after which the readiness probe times out. health.readiness-period int32 How often to perform the readiness probe. health.readiness-success-threshold int32 Minimum consecutive successes for the readiness probe to be considered successful after having failed. health.readiness-failure-threshold int32 Minimum consecutive failures for the readiness probe to be considered failed after having succeeded. health.startup-probe-enabled bool Configures the startup probe for the integration container (default false ). health.startup-scheme string Scheme to use when connecting to the startup probe (default HTTP ). health.startup-initial-delay int32 Number of seconds after the container has started before the startup probe is initiated. health.startup-timeout int32 Number of seconds after which the startup probe times out. health.startup-period int32 How often to perform the startup probe. 
health.startup-success-threshold int32 Minimum consecutive successes for the startup probe to be considered successful after having failed. health.startup-failure-threshold int32 Minimum consecutive failures for the startup probe to be considered failed after having succeeded. 6.2.2. Knative Trait The Knative trait automatically discovers addresses of Knative resources and inject them into the running integration. The full Knative configuration is injected in the CAMEL_KNATIVE_CONFIGURATION in JSON format. The Camel Knative component will then use the full configuration to configure the routes. The trait is enabled by default when the Knative profile is active. This trait is available in the following profiles: Knative . 6.2.2.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait knative.[key]=[value] --trait knative.[key2]=[value2] integration.java The following configuration options are available: Property Type Description knative.enabled bool Can be used to enable or disable a trait. All traits share this common property. knative.configuration string Can be used to inject a Knative complete configuration in JSON format. knative.channel-sources []string List of channels used as source of integration routes. Can contain simple channel names or full Camel URIs. knative.channel-sinks []string List of channels used as destination of integration routes. Can contain simple channel names or full Camel URIs. knative.endpoint-sources []string List of channels used as source of integration routes. knative.endpoint-sinks []string List of endpoints used as destination of integration routes. Can contain simple endpoint names or full Camel URIs. knative.event-sources []string List of event types that the integration will be subscribed to. Can contain simple event types or full Camel URIs (to use a specific broker different from "default"). knative.event-sinks []string List of event types that the integration will produce. Can contain simple event types or full Camel URIs (to use a specific broker). knative.filter-source-channels bool Enables filtering on events based on the header "ce-knativehistory". Since this header has been removed in newer versions of Knative, filtering is disabled by default. knative.sink-binding bool Allows binding the integration to a sink via a Knative SinkBinding resource. This can be used when the integration targets a single sink. It's enabled by default when the integration targets a single sink (except when the integration is owned by a Knative source). knative.auto bool Enable automatic discovery of all trait properties. 6.2.3. Knative Service Trait The Knative Service trait allows to configure options when running the integration as Knative service instead of a standard Kubernetes Deployment. Running integrations as Knative Services adds auto-scaling (and scaling-to-zero) features, but those features are only meaningful when the routes use a HTTP endpoint consumer. This trait is available in the following profiles: Knative . 6.2.3.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait knative-service.[key]=[value] --trait knative-service.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description knative-service.enabled bool Can be used to enable or disable a trait. All traits share this common property. knative-service.annotations map[string]string The annotations are added to route. 
This can be used to set knative service specific annotations. For more details see, Route Specific Annotations . CLI usage example: -t "knative-service.annotations.'haproxy.router.openshift.io/balance'=roundrobin" knative-service.autoscaling-class string Configures the Knative autoscaling class property (e.g. to set hpa.autoscaling.knative.dev or kpa.autoscaling.knative.dev autoscaling). Refer to the Knative documentation for more information. knative-service.autoscaling-metric string Configures the Knative autoscaling metric property (e.g. to set concurrency based or cpu based autoscaling). Refer to the Knative documentation for more information. knative-service.autoscaling-target int Sets the allowed concurrency level or CPU percentage (depending on the autoscaling metric) for each Pod. Refer to the Knative documentation for more information. knative-service.min-scale int The minimum number of Pods that should be running at any time for the integration. It's zero by default, meaning that the integration is scaled down to zero when not used for a configured amount of time. Refer to the Knative documentation for more information. knative-service.max-scale int An upper bound for the number of Pods that can be running in parallel for the integration. Knative has its own cap value that depends on the installation. Refer to the Knative documentation for more information. knative-service.auto bool Automatically deploy the integration as Knative service when all conditions hold: Integration is using the Knative profile All routes are either starting from a HTTP based consumer or a passive consumer (e.g. direct is a passive consumer) 6.2.4. Logging Trait The Logging trait is used to configure Integration runtime logging options (such as color and format). The logging backend is provided by Quarkus, whose configuration is documented at https://quarkus.io/guides/logging . This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.4.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait logging.[key]=[value] --trait logging.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description logging.enabled bool Can be used to enable or disable a trait. All traits share this common property. logging.color bool Colorize the log output logging.format string Logs message format logging.level string Adjust the logging level (defaults to INFO) logging.json bool Output the logs in JSON logging.json-pretty-print bool Enable "pretty printing" of the JSON logs 6.2.5. Master Trait The Master trait allows to configure the integration to automatically leverage Kubernetes resources for leader election and starting master routes only on certain instances. It is activated automatically when using the master endpoint in a route. For example: from("master:lockname:telegram:bots")... . Note This trait adds special permissions to the integration service account to read/write configmaps and read pods. It is recommended to use a different service account than "default" when running the integration. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.5.1. 
Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait master.[key]=[value] --trait master.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description master.enabled bool Can be used to enable or disable a trait. All traits share this common property. master.auto bool Enables automatic configuration of the trait. master.include-delegate-dependencies bool When this flag is active, the operator analyzes the source code to add dependencies required by delegate endpoints. For example: when using master:lockname:timer , then camel:timer is automatically added to the set of dependencies. It is enabled by default. master.resource-name string Name of the configmap/lease resource that will be used to store the lock. Defaults to "<integration-name>-lock". master.resource-type string Type of Kubernetes resource to use for locking ("ConfigMap" or "Lease"). Defaults to "Lease". master.label-key string Label that will be used to identify all pods contending the lock. Defaults to "camel.apache.org/integration". master.label-value string Label value that will be used to identify all pods contending the lock. Defaults to the integration name. 6.2.6. Mount Trait The Mount trait can be used to configure volumes mounted on the Integration Pods. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Note The mount trait is a platform trait and cannot be disabled by the user. 6.2.6.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait mount.[key]=[value] --trait mount.[key2]=[value2] integration.java The following configuration options are available: Property Type Description mount.enabled bool Deprecated: no longer in use. mount.configs []string A list of configuration pointing to configmap/secret. The configuration are expected to be UTF-8 resources as they are processed by runtime Camel Context and tried to be parsed as property files. They are also made available on the classpath in order to ease their usage directly from the Route. mount.resources []string A list of resources (text or binary content) pointing to configmap/secret. The resources are expected to be any resource type (text or binary content). The destination path can be either a default location or any path specified by the user. mount.volumes []string A list of Persistent Volume Claims to be mounted. Syntax: mount.hot-reload bool Enable "hot reload" when a secret/configmap mounted is edited (default false ) Note The syntax for mount.configs property is, Syntax: [configmap | secret]:name[/key] , where name represents the resource name and key optionally represents the resource key to be filtered. The syntax for mount.resources property is, Syntax: [configmap | secret]:name[/key] [@path] , where name represents the resource name, key optionally represents the resource key to be filtered and path represents the destination path. 6.2.7. Telemetry Trait Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . The Telemetry trait can be used to automatically publish tracing information to an OTLP compatible collector. The trait is able to automatically discover the telemetry OTLP endpoint available in the namespace (supports Jaerger in version 1.35+). The Telemetry trait is disabled by default. Warning The Telemetry trait cannot be enabled at the same time as the Tracing trait. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.7.1. Configuration Trait properties can be specified when running any integration with the CLI. USD kamel run --trait telemetry.[key]=[value] --trait telemetry.[key2]=[value2] integration.java The following configuration options are available: Property Type Description telemetry.enabled bool Can be used to enable or disable a trait. All traits share this common property. telemetry.auto bool Enables automatic configuration of the trait, including automatic discovery of the telemetry endpoint. telemetry.service-name string The name of the service that publishes telemetry data (defaults to the integration name) telemetry.endpoint string The target endpoint of the Telemetry service (automatically discovered by default) telemetry.sampler string The sampler of the telemetry used for tracing (default "on") telemetry.sampler-ratio string The sampler ratio of the telemetry used for tracing telemetry.sampler-parent-based bool The sampler of the telemetry used for tracing is parent based (default "true") 6.2.7.2. Examples To activate tracing to a deployed OTLP API Jaeger through discovery: USD kamel run -t telemetry.enable=true ... To define a specific deployed OTLP gRPC reciever: USD kamel run -t telemetry.enable=true -t telemetry.endpoint=http://instance-collector:4317 ... To define another sampler service name: USD kamel run -t telemetry.enable=true -t telemetry.service-name=tracer_myintegration ... To use a ratio sampler with a sampling ratio of 1 to every 1,000 : USD kamel run -t telemetry.enable=true -t telemetry.sampler=ratio -t telemetry.sampler-ratio=0.001 ... 6.2.8. Pod Trait The pod trait allows the customization of the Integration pods. It applies the PodSpecTemplate struct contained in the Integration .spec.podTemplate field, into the Integration deployment Pods template, using strategic merge patch. This is used to customize the container where Camel routes execute, by using the integration container name. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Note 1 : In the current implementation, template options override the configuration options defined by using CLI. For example: USD kamel run Integration.java --pod-template template.yaml --env TEST_VARIABLE=will_be_overriden --env ANOTHER_VARIABLE=Im_There The value from the template overwrites the TEST_VARIABLE environment variable, while ANOTHER_VARIABLE is unchanged. Note 2: Changes to the integration container entrypoint are not applied due to current trait execution order. 6.2.8.1. Configuration Trait properties are specified when running any integration by using the CLI. USD kamel run --trait pod.[key]=[value] integration.java The following configuration options are available. Property Type Description pod.enabled bool Can be used to enable or disable a trait. All traits share this common property. 6.2.8.2. 
Sidecar Containers Example with the following Integration, that reads files from a directory: Integration.groovy from('file:///var/log') .convertBodyTo(String.class) .setBody().simple('USD{body}: {{TEST_VARIABLE}} ') .log('USD{body}') In addition, the following Pod template adds a sidecar container to the Integration Pod, generating some data into the directory, and mounts it into the integration container. template.yaml containers: - name: integration env: - name: TEST_VARIABLE value: "hello from the template" volumeMounts: - name: var-logs mountPath: /var/log - name: sidecar image: busybox command: [ "/bin/sh" , "-c", "while true; do echo USD(date -u) 'Content from the sidecar container' > /var/log/file.txt; sleep 1;done" ] volumeMounts: - name: var-logs mountPath: /var/log volumes: - name: var-logs emptyDir: { } The Integration route logs the content of the file generated by the sidecar container. Example: USD kamel run Integration.java --pod-template template.yaml ... Condition "Ready" is "True" for Integration integration [1] 2021-04-30 07:40:03,136 INFO [route1] (Camel (camel-1) thread #0 - file:///var/log) Fri Apr 30 07:40:02 UTC 2021 Content from the sidecar container [1] : hello from the template [1] 2021-04-30 07:40:04,140 INFO [route1] (Camel (camel-1) thread #0 - file:///var/log) Fri Apr 30 07:40:03 UTC 2021 Content from the sidecar container [1] : hello from the template [1] 2021-04-30 07:40:05,142 INFO [route1] (Camel (camel-1) thread #0 - file:///var/log) Fri Apr 30 07:40:04 UTC 2021 Content from the sidecar container [1] : hello from the template 6.2.8.3. Init Containers With this trait you are able to run initContainers. To run the initContainers, you must do the following. Include at least one container in the template spec. Provide the configuration for the default container, which is integration. Following is a simple example. template.yaml containers: - name: integration initContainers: - name: init image: busybox command: [ "/bin/sh" , "-c", "echo 'hello'!" ] The integration container is overwritten by the container running the route, and the initContainer runs before the route as expected. 6.2.9. Prometheus Trait The Prometheus trait configures a Prometheus-compatible endpoint. It also creates a PodMonitor resource, so that the endpoint can be scraped automatically, when using the Prometheus operator. The metrics are exposed using MicroProfile Metrics. Warning The creation of the PodMonitor resource requires the Prometheus Operator custom resource definition to be installed. You can set pod-monitor to false for the Prometheus trait to work without the Prometheus Operator. The Prometheus trait is disabled by default. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.9.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait prometheus.[key]=[value] --trait prometheus.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description prometheus.enabled bool Can be used to enable or disable a trait. All traits share this common property. prometheus.pod-monitor bool Whether a PodMonitor resource is created (default true ). prometheus.pod-monitor-labels []string The PodMonitor resource labels, applicable when pod-monitor is true . 6.2.10. Pdb Trait The PDB trait allows to configure the PodDisruptionBudget resource for the Integration pods. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.10.1. 
Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait pdb.[key]=[value] --trait pdb.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description pdb.enabled bool Can be used to enable or disable a trait. All traits share this common property. pdb.min-available string The number of pods for the Integration that must still be available after an eviction. It can be either an absolute number or a percentage. Only one of min-available and max-unavailable can be specified. pdb.max-unavailable string The number of pods for the Integration that can be unavailable after an eviction. It can be either an absolute number or a percentage (default 1 if min-available is also not set). Only one of max-unavailable and min-available can be specified. 6.2.11. Pull Secret Trait The Pull Secret trait sets a pull secret on the pod, to allow Kubernetes to retrieve the container image from an external registry. The pull secret can be specified manually or, in case you've configured authentication for an external container registry on the IntegrationPlatform , the same secret is used to pull images. It's enabled by default whenever you configure authentication for an external container registry, so it assumes that external registries are private. If your registry does not need authentication for pulling images, you can disable this trait. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.2.11.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait pull-secret.[key]=[value] --trait pull-secret.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description pull-secret.enabled bool Can be used to enable or disable a trait. All traits share this common property. pull-secret.secret-name string The pull secret name to set on the Pod. If left empty this is automatically taken from the IntegrationPlatform registry configuration. pull-secret.image-puller-delegation bool When using a global operator with a shared platform, this enables delegation of the system:image-puller cluster role on the operator namespace to the integration service account. pull-secret.auto bool Automatically configures the platform registry secret on the pod if it is of type kubernetes.io/dockerconfigjson . 6.2.12. Route Trait The Route trait can be used to configure the creation of OpenShift routes for the integration. The certificate and key contents may be sourced either from the local filesystem or in a Openshift secret object. The user may use the parameters ending in -secret (example: tls-certificate-secret ) to reference a certificate stored in a secret . Parameters ending in -secret have higher priorities and in case the same route parameter is set, for example: tls-key-secret and tls-key , then tls-key-secret is used. The recommended approach to set the key and certificates is to use secrets to store their contents and use the following parameters to reference them: tls-certificate-secret , tls-key-secret , tls-ca-certificate-secret , tls-destination-ca-certificate-secret See the examples section at the end of this page to see the setup options. This trait is available in the following profiles: OpenShift . 6.2.12.1. 
Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait route.[key]=[value] --trait route.[key2]=[value2] integration.java The following configuration options are available: Property Type Description route.enabled bool Can be used to enable or disable a trait. All traits share this common property. route.annotations map[string]string The annotations are added to route. This can be used to set route specific annotations. For annotations options see Route Specific Annotations . CLI usage example: -t "route.annotations.'haproxy.router.openshift.io/balance'=roundrobin route.host string To configure the host exposed by the route. route.tls-termination string The TLS termination type, like edge , passthrough or reencrypt . Refer to the OpenShift route documentation for additional information. route.tls-certificate string The TLS certificate contents. Refer to the OpenShift route documentation for additional information. route.tls-certificate-secret string The secret name and key reference to the TLS certificate. The format is "secret-name[/key-name]", the value represents the secret name, if there is only one key in the secret it will be read, otherwise you can set a key name separated with a "/". Refer to the OpenShift route documentation for additional information. route.tls-key string The TLS certificate key contents. Refer to the OpenShift route documentation for additional information. route.tls-key-secret string The secret name and key reference to the TLS certificate key. The format is "secret-name[/key-name]", the value represents the secret name, if there is only one key in the secret it will be read, otherwise you can set a key name separated with a "/". Refer to the OpenShift route documentation for additional information. route.tls-ca-certificate string The TLS CA certificate contents. Refer to the OpenShift route documentation for additional information. route.tls-ca-certificate-secret string The secret name and key reference to the TLS CA certificate. The format is "secret-name[/key-name]", the value represents the secret name, if there is only one key in the secret it will be read, otherwise you can set a key name separated with a "/". Refer to the OpenShift route documentation for additional information. route.tls-destination-ca-certificate string The destination CA certificate provides the contents of the ca certificate of the final destination. When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection. If this field is not specified, the router may provide its own destination CA and perform hostname validation using the short service name (service.namespace.svc), which allows infrastructure generated certificates to automatically verify. Refer to the OpenShift route documentation for additional information. route.tls-destination-ca-certificate-secret string The secret name and key reference to the destination CA certificate. The format is "secret-name[/key-name]", the value represents the secret name, if there is only one key in the secret it will be read, otherwise you can set a key name separated with a "/". Refer to the OpenShift route documentation for additional information. route.tls-insecure-edge-termination-policy string To configure how to deal with insecure traffic, e.g. Allow , Disable or Redirect traffic. Refer to the OpenShift route documentation for additional information. 6.2.12.2. 
Examples These examples use secrets to store the certificates and keys that are referenced in the integrations. Read the OpenShift route documentation for detailed information about routes. The PlatformHttpServer.java file is the integration example. As a prerequisite for running these examples, you need a secret that contains a key and a certificate. 6.2.12.2.1. Generate a self-signed certificate and create a secret openssl genrsa -out tls.key openssl req -new -key tls.key -out csr.csr -subj "/CN=my-server.com" openssl x509 -req -in csr.csr -signkey tls.key -out tls.crt oc create secret tls my-combined-certs --key=tls.key --cert=tls.crt 6.2.12.2.2. Making an HTTP request to the route For all examples, you can use the following curl command to make an HTTP request. It uses inline scripts to retrieve the OpenShift namespace and the cluster base domain. If your shell does not support these inline scripts, replace them with the values of your actual namespace and base domain. curl -k https://platform-http-server-`oc config view --minify -o 'jsonpath={..namespace}'`.`oc get dnses/cluster -ojsonpath='{.spec.baseDomain}'`/hello?name=Camel-K To add an edge route using secrets, use the parameters ending in -secret to set the name of the secret that contains the certificate. This route example references a secret named my-combined-certs which contains two keys named tls.key and tls.crt . kamel run --dev PlatformHttpServer.java -t route.tls-termination=edge -t route.tls-certificate-secret=my-combined-certs/tls.crt -t route.tls-key-secret=my-combined-certs/tls.key To add a passthrough route using secrets, TLS is set up in the integration pod, so the keys and certificates must be visible in the running integration pod. To achieve this, mount the secret in the integration pod with the --resource kamel parameter, and reference the certificate files in the running pod with the Camel Quarkus parameters that start with -p quarkus.http.ssl.certificate . This route example references a secret named my-combined-certs which contains two keys named tls.key and tls.crt . kamel run --dev PlatformHttpServer.java --resource secret:my-combined-certs@/etc/ssl/my-combined-certs -p quarkus.http.ssl.certificate.file=/etc/ssl/my-combined-certs/tls.crt -p quarkus.http.ssl.certificate.key-file=/etc/ssl/my-combined-certs/tls.key -t route.tls-termination=passthrough -t container.port=8443 To add a reencrypt route using secrets, TLS is set up in the integration pod, so the keys and certificates must be visible in the running integration pod. To achieve this, mount the secret in the integration pod with the --resource kamel parameter, and reference the certificate files in the running pod with the Camel Quarkus parameters that start with -p quarkus.http.ssl.certificate . This route example references a secret named my-combined-certs which contains two keys named tls.key and tls.crt .
kamel run --dev PlatformHttpServer.java --resource secret:my-combined-certs@/etc/ssl/my-combined-certs -p quarkus.http.ssl.certificate.file=/etc/ssl/my-combined-certs/tls.crt -p quarkus.http.ssl.certificate.key-file=/etc/ssl/my-combined-certs/tls.key -t route.tls-termination=reencrypt -t route.tls-destination-ca-certificate-secret=my-combined-certs/tls.crt -t route.tls-certificate-secret=my-combined-certs/tls.crt -t route.tls-key-secret=my-combined-certs/tls.key -t container.port=8443 To add a reencrypt route, you can use a specific certificate from a secret for the route and OpenShift service serving certificates for the integration endpoint. This way, the OpenShift service serving certificate is set up only in the integration pod. The keys and certificates must be visible in the running integration pod. To achieve this, mount the secret in the integration pod with the --resource kamel parameter, and reference the certificate files in the running pod with the Camel Quarkus parameters that start with -p quarkus.http.ssl.certificate . This route example references a secret named my-combined-certs which contains two keys named tls.key and tls.crt . kamel run --dev PlatformHttpServer.java --resource secret:cert-from-openshift@/etc/ssl/cert-from-openshift -p quarkus.http.ssl.certificate.file=/etc/ssl/cert-from-openshift/tls.crt -p quarkus.http.ssl.certificate.key-file=/etc/ssl/cert-from-openshift/tls.key -t route.tls-termination=reencrypt -t route.tls-certificate-secret=my-combined-certs/tls.crt -t route.tls-key-secret=my-combined-certs/tls.key -t container.port=8443 Then annotate the integration service to inject the OpenShift service serving certificate: oc annotate service platform-http-server service.beta.openshift.io/serving-cert-secret-name=cert-from-openshift To add an edge route using a certificate and a private key provided from your local filesystem, use the following example. It uses inline scripts to read the certificate and private key file contents and remove all newline characters (this is required to pass the contents as parameter values on a single line). kamel run PlatformHttpServer.java --dev -t route.tls-termination=edge -t route.tls-certificate="USD(cat tls.crt|awk 'NF {sub(/\r/, ""); printf "%s\\n",USD0;}')" -t route.tls-key="USD(cat tls.key|awk 'NF {sub(/\r/, ""); printf "%s\\n",USD0;}')" 6.2.13. Service Trait The Service trait exposes the integration with a Service resource so that it can be accessed by other applications (or integrations) in the same namespace. It's enabled by default if the integration depends on a Camel component that can expose an HTTP endpoint. This trait is available in the following profiles: Kubernetes, OpenShift . 6.2.13.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait service.[key]=[value] --trait service.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description service.enabled bool Can be used to enable or disable a trait. All traits share this common property. service.auto bool To automatically detect from the code if a Service needs to be created. service.node-port bool Enable Service to be exposed as NodePort (default false ). 6.3. Camel K platform traits 6.3.1. Builder Trait The builder trait is internally used to determine the best strategy to build and configure IntegrationKits. This trait is available in the following profiles: Kubernetes, Knative, OpenShift .
Warning The builder trait is a platform trait : disabling it may compromise the platform functionality. 6.3.1.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait builder.[key]=[value] --trait builder.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description builder.enabled bool Can be used to enable or disable a trait. All traits share this common property. builder.verbose bool Enable verbose logging on build components that support it (e.g., OpenShift build pod). Kaniko and Buildah are not supported. builder.properties []string A list of properties to be provided to the build task 6.3.2. Container Trait The Container trait can be used to configure properties of the container where the integration will run. It also provides configuration for Services associated to the container. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The container trait is a platform trait : disabling it may compromise the platform functionality. 6.3.2.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait container.[key]=[value] --trait container.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description container.enabled bool Can be used to enable or disable a trait. All traits share this common property. container.auto bool container.request-cpu string The minimum amount of CPU required. container.request-memory string The minimum amount of memory required. container.limit-cpu string The maximum amount of CPU required. container.limit-memory string The maximum amount of memory required. container.expose bool Can be used to enable/disable exposure via kubernetes Service. container.port int To configure a different port exposed by the container (default 8080 ). container.port-name string To configure a different port name for the port exposed by the container (default http ). container.service-port int To configure under which service port the container port is to be exposed (default 80 ). container.service-port-name string To configure under which service port name the container port is to be exposed (default http ). container.name string The main container name. It's named integration by default. container.image string The main container image container.probes-enabled bool ProbesEnabled enable/disable probes on the container (default false ) 6.3.3. Camel Trait The Camel trait can be used to configure versions of Apache Camel K runtime and related libraries, it cannot be disabled. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The camel trait is a platform trait : disabling it may compromise the platform functionality. 6.3.3.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait camel.[key]=[value] --trait camel.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description camel.enabled bool Can be used to enable or disable a trait. All traits share this common property. 6.3.4. Dependencies Trait The Dependencies trait is internally used to automatically add runtime dependencies based on the integration that the user wants to run. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 
Warning The dependencies trait is a platform trait : disabling it may compromise the platform functionality. 6.3.4.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait dependencies.[key]=[value] Integration.java The following configuration options are available: Property Type Description dependencies.enabled bool Can be used to enable or disable a trait. All traits share this common property. 6.3.5. Deployer Trait The deployer trait can be used to explicitly select the kind of high level resource that will deploy the integration. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The deployer trait is a platform trait : disabling it may compromise the platform functionality. 6.3.5.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait deployer.[key]=[value] --trait deployer.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description deployer.enabled bool Can be used to enable or disable a trait. All traits share this common property. deployer.kind string Allows to explicitly select the desired deployment kind between deployment , cron-job or knative-service when creating the resources for running the integration. 6.3.6. Deployment Trait The Deployment trait is responsible for generating the Kubernetes deployment that will make sure the integration will run in the cluster. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The deployment trait is a platform trait : disabling it may compromise the platform functionality. 6.3.6.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait deployment.[key]=[value] Integration.java The following configuration options are available: Property Type Description deployment.enabled bool Can be used to enable or disable a trait. All traits share this common property. 6.3.7. Environment Trait The environment trait is used internally to inject standard environment variables in the integration container, such as NAMESPACE , POD_NAME and others. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The environment trait is a platform trait : disabling it may compromise the platform functionality. 6.3.7.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait environment.[key]=[value] --trait environment.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description environment.enabled bool Can be used to enable or disable a trait. All traits share this common property. environment.container-meta bool Enables injection of NAMESPACE and POD_NAME environment variables (default true ) 6.3.8. Error Handler Trait The error-handler is a platform trait used to inject Error Handler source into the integration runtime. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The error-handler trait is a platform trait : disabling it may compromise the platform functionality. 6.3.8.1. 
Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait error-handler.[key]=[value] --trait error-handler.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description error-handler.enabled bool Can be used to enable or disable a trait. All traits share this common property. error-handler.ref string The error handler ref name provided or found in application properties 6.3.9. Jvm Trait The JVM trait is used to configure the JVM that runs the integration. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The jvm trait is a platform trait : disabling it may compromise the platform functionality. 6.3.9.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait jvm.[key]=[value] --trait jvm.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description jvm.enabled bool Can be used to enable or disable a trait. All traits share this common property. jvm.debug bool Activates remote debugging, so that a debugger can be attached to the JVM, e.g., using port-forwarding jvm.debug-suspend bool Suspends the target JVM immediately before the main class is loaded jvm.print-command bool Prints the command used the start the JVM in the container logs (default true ) jvm.debug-address string Transport address at which to listen for the newly launched JVM (default *:5005 ) jvm.options []string A list of JVM options jvm.classpath string Additional JVM classpath (use Linux classpath separator) 6.3.9.2. Examples Include an additional classpath to the Integration : USD kamel run -t jvm.classpath=/path/to/my-dependency.jar:/path/to/another-dependency.jar ... 6.3.10. Kamelets Trait The kamelets trait is a platform trait used to inject Kamelets into the integration runtime. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The kamelets trait is a platform trait : disabling it may compromise the platform functionality. 6.3.10.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait kamelets.[key]=[value] --trait kamelets.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description kamelets.enabled bool Can be used to enable or disable a trait. All traits share this common property. kamelets.auto bool Automatically inject all referenced Kamelets and their default configuration (enabled by default) kamelets.list string Comma separated list of Kamelet names to load into the current integration 6.3.11. NodeAffinity Trait The NodeAffinity trait enables you to constrain the nodes that the integration pods are eligible to schedule on, through the following paths: Based on labels on the node or with inter-pod affinity and anti-affinity. Based on labels on pods that are already running on the nodes. This trait is disabled by default. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . 6.3.11.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait affinity.[key]=[value] --trait affinity.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description affinity.enabled bool Can be used to enable or disable a trait. All traits share this common property. 
affinity.pod-affinity bool Always co-locates multiple replicas of the integration in the same node (default false ). affinity.pod-anti-affinity bool Never co-locates multiple replicas of the integration in the same node (default false ). affinity.node-affinity-labels []string Defines a set of nodes the integration pod(s) are eligible to be scheduled on, based on labels on the node. affinity.pod-affinity-labels []string Defines a set of pods (namely those matching the label selector, relative to the given namespace) that the integration pod(s) should be co-located with. affinity.pod-anti-affinity-labels []string Defines a set of pods (namely those matching the label selector, relative to the given namespace) that the integration pod(s) should not be co-located with. 6.3.11.2. Examples To schedule the integration pod(s) on a specific node using the built-in node label kubernetes.io/hostname : USD kamel run -t affinity.node-affinity-labels="kubernetes.io/hostname in(node-66-50.hosted.k8s.tld)" ... To schedule a single integration pod per node (using the Exists operator): USD kamel run -t affinity.pod-anti-affinity-labels="camel.apache.org/integration" ... To co-locate the integration pod(s) with other integration pod(s): USD kamel run -t affinity.pod-affinity-labels="camel.apache.org/integration in(it1, it2)" ... The *-labels options follow the requirements from Label selectors . They can be multi-valuated, then the requirements list is ANDed, e.g., to schedule a single integration pod per node AND not co-located with the Camel K operator pod(s): USD kamel run -t affinity.pod-anti-affinity-labels="camel.apache.org/integration" -t affinity.pod-anti-affinity-labels="camel.apache.org/component=operator" ... More information can be found in the official Kubernetes documentation about Assigning Pods to Nodes . 6.3.12. Openapi Trait The OpenAPI DSL trait is internally used to allow creating integrations from a OpenAPI specs. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The openapi trait is a platform trait : disabling it may compromise the platform functionality. 6.3.12.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait openapi.[key]=[value] Integration.java The following configuration options are available: Property Type Description openapi.enabled bool Can be used to enable or disable a trait. All traits share this common property. 6.3.13. Owner Trait The Owner trait ensures that all created resources belong to the integration being created and transfers annotations and labels on the integration onto these owned resources. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The owner trait is a platform trait : disabling it may compromise the platform functionality. 6.3.13.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait owner.[key]=[value] --trait owner.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description owner.enabled bool Can be used to enable or disable a trait. All traits share this common property. owner.target-annotations []string The set of annotations to be transferred owner.target-labels []string The set of labels to be transferred 6.3.14. Platform Trait The platform trait is a base trait that is used to assign an integration platform to an integration. 
In case the platform is missing, the trait is allowed to create a default platform. This feature is especially useful in contexts where there's no need to provide a custom configuration for the platform (e.g. on OpenShift the default settings work, since there's an embedded container image registry). This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The platform trait is a platform trait : disabling it may compromise the platform functionality. 6.3.14.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait platform.[key]=[value] --trait platform.[key2]=[value2] Integration.java The following configuration options are available: Property Type Description platform.enabled bool Can be used to enable or disable a trait. All traits share this common property. platform.create-default bool To create a default (empty) platform when the platform is missing. platform.global bool Indicates if the platform should be created globally in the case of global operator (default true). platform.auto bool To automatically detect from the environment if a default platform can be created (it will be created on OpenShift only). 6.3.15. Quarkus Trait The Quarkus trait activates the Quarkus runtime. It's enabled by default. Note Compiling to a native executable, i.e. when using package-type=native , is only supported for kamelets, as well as YAML integrations. It also requires at least 4GiB of memory, so the Pod running the native build, that is either the operator Pod, or the build Pod (depending on the build strategy configured for the platform), must have enough memory available. This trait is available in the following profiles: Kubernetes, Knative, OpenShift . Warning The quarkus trait is a platform trait : disabling it may compromise the platform functionality. 6.3.15.1. Configuration Trait properties can be specified when running any integration with the CLI: USD kamel run --trait quarkus.[key]=[value] --trait quarkus.[key2]=[value2] integration.java The following configuration options are available: Property Type Description quarkus.enabled bool Can be used to enable or disable a trait. All traits share this common property. quarkus.package-type []github.com/apache/camel-k/pkg/trait.quarkusPackageType The Quarkus package types, either fast-jar or native (default fast-jar ). In case both fast-jar and native are specified, two IntegrationKit resources are created, with the native kit having precedence over the fast-jar one once ready. The order influences the resolution of the current kit for the integration. The kit corresponding to the first package type will be assigned to the integration in case no existing kit that matches the integration exists. 6.3.15.2. Supported Camel Components Camel K only supports the Camel components that are available as Camel Quarkus Extensions out-of-the-box. 6.3.15.3. Examples 6.3.15.3.1. Automatic Rollout Deployment to Native Integration While the compilation to native executables produces integrations that start faster and consume less memory at runtime, the build process is resources intensive, and takes a longer time than the packaging to traditional Java applications. In order to combine the best of both worlds, it's possible to configure the Quarkus trait to run both traditional and native builds in parallel when running an integration, e.g.: USD kamel run -t quarkus.package-type=fast-jar -t quarkus.package-type=native ... 
The integration pod runs as soon as the fast-jar build completes. As soon as the native build completes, a rollout deployment to the native image is triggered, with no service interruption. | [
"kamel run --trait service.enabled=false my-integration.yaml",
"kamel run --trait health.[key]=[value] --trait health.[key2]=[value2] integration.java",
"kamel run --trait knative.[key]=[value] --trait knative.[key2]=[value2] integration.java",
"kamel run --trait knative-service.[key]=[value] --trait knative-service.[key2]=[value2] Integration.java",
"kamel run --trait logging.[key]=[value] --trait logging.[key2]=[value2] Integration.java",
"kamel run --trait master.[key]=[value] --trait master.[key2]=[value2] Integration.java",
"kamel run --trait mount.[key]=[value] --trait mount.[key2]=[value2] integration.java",
"kamel run --trait telemetry.[key]=[value] --trait telemetry.[key2]=[value2] integration.java",
"kamel run -t telemetry.enable=true",
"kamel run -t telemetry.enable=true -t telemetry.endpoint=http://instance-collector:4317",
"kamel run -t telemetry.enable=true -t telemetry.service-name=tracer_myintegration",
"kamel run -t telemetry.enable=true -t telemetry.sampler=ratio -t telemetry.sampler-ratio=0.001",
"kamel run Integration.java --pod-template template.yaml --env TEST_VARIABLE=will_be_overriden --env ANOTHER_VARIABLE=Im_There",
"kamel run --trait pod.[key]=[value] integration.java",
"from('file:///var/log') .convertBodyTo(String.class) .setBody().simple('USD{body}: {{TEST_VARIABLE}} ') .log('USD{body}')",
"containers: - name: integration env: - name: TEST_VARIABLE value: \"hello from the template\" volumeMounts: - name: var-logs mountPath: /var/log - name: sidecar image: busybox command: [ \"/bin/sh\" , \"-c\", \"while true; do echo USD(date -u) 'Content from the sidecar container' > /var/log/file.txt; sleep 1;done\" ] volumeMounts: - name: var-logs mountPath: /var/log volumes: - name: var-logs emptyDir: { }",
"kamel run Integration.java --pod-template template.yaml Condition \"Ready\" is \"True\" for Integration integration [1] 2021-04-30 07:40:03,136 INFO [route1] (Camel (camel-1) thread #0 - file:///var/log) Fri Apr 30 07:40:02 UTC 2021 Content from the sidecar container [1] : hello from the template [1] 2021-04-30 07:40:04,140 INFO [route1] (Camel (camel-1) thread #0 - file:///var/log) Fri Apr 30 07:40:03 UTC 2021 Content from the sidecar container [1] : hello from the template [1] 2021-04-30 07:40:05,142 INFO [route1] (Camel (camel-1) thread #0 - file:///var/log) Fri Apr 30 07:40:04 UTC 2021 Content from the sidecar container [1] : hello from the template",
"containers: - name: integration initContainers: - name: init image: busybox command: [ \"/bin/sh\" , \"-c\", \"echo 'hello'!\" ]",
"kamel run --trait prometheus.[key]=[value] --trait prometheus.[key2]=[value2] Integration.java",
"kamel run --trait pdb.[key]=[value] --trait pdb.[key2]=[value2] Integration.java",
"kamel run --trait pull-secret.[key]=[value] --trait pull-secret.[key2]=[value2] Integration.java",
"kamel run --trait route.[key]=[value] --trait route.[key2]=[value2] integration.java",
"openssl genrsa -out tls.key openssl req -new -key tls.key -out csr.csr -subj \"/CN=my-server.com\" openssl x509 -req -in csr.csr -signkey tls.key -out tls.crt create secret tls my-combined-certs --key=tls.key --cert=tls.crt",
"curl -k https://platform-http-server-`oc config view --minify -o 'jsonpath={..namespace}'`.`oc get dnses/cluster -ojsonpath='{.spec.baseDomain}'`/hello?name=Camel-K",
"kamel run --dev PlatformHttpServer.java -t route.tls-termination=edge -t route.tls-certificate-secret=my-combined-certs/tls.crt -t route.tls-key-secret=my-combined-certs/tls.key",
"kamel run --dev PlatformHttpServer.java --resource secret:my-combined-certs@/etc/ssl/my-combined-certs -p quarkus.http.ssl.certificate.file=/etc/ssl/my-combined-certs/tls.crt -p quarkus.http.ssl.certificate.key-file=/etc/ssl/my-combined-certs/tls.key -t route.tls-termination=passthrough -t container.port=8443",
"kamel run --dev PlatformHttpServer.java --resource secret:my-combined-certs@/etc/ssl/my-combined-certs -p quarkus.http.ssl.certificate.file=/etc/ssl/my-combined-certs/tls.crt -p quarkus.http.ssl.certificate.key-file=/etc/ssl/my-combined-certs/tls.key -t route.tls-termination=reencrypt -t route.tls-destination-ca-certificate-secret=my-combined-certs/tls.crt -t route.tls-certificate-secret=my-combined-certs/tls.crt -t route.tls-key-secret=my-combined-certs/tls.key -t container.port=8443",
"kamel run --dev PlatformHttpServer.java --resource secret:cert-from-openshift@/etc/ssl/cert-from-openshift -p quarkus.http.ssl.certificate.file=/etc/ssl/cert-from-openshift/tls.crt -p quarkus.http.ssl.certificate.key-file=/etc/ssl/cert-from-openshift/tls.key -t route.tls-termination=reencrypt -t route.tls-certificate-secret=my-combined-certs/tls.crt -t route.tls-key-secret=my-combined-certs/tls.key -t container.port=8443",
"annotate service platform-http-server service.beta.openshift.io/serving-cert-secret-name=cert-from-openshift",
"kamel run PlatformHttpServer.java --dev -t route.tls-termination=edge -t route.tls-certificate=\"USD(cat tls.crt|awk 'NF {sub(/\\r/, \"\"); printf \"%s\\\\n\",USD0;}')\" -t route.tls-key=\"USD(cat tls.key|awk 'NF {sub(/\\r/, \"\"); printf \"%s\\\\n\",USD0;}')\"",
"kamel run --trait service.[key]=[value] --trait service.[key2]=[value2] Integration.java",
"kamel run --trait builder.[key]=[value] --trait builder.[key2]=[value2] Integration.java",
"kamel run --trait container.[key]=[value] --trait container.[key2]=[value2] Integration.java",
"kamel run --trait camel.[key]=[value] --trait camel.[key2]=[value2] Integration.java",
"kamel run --trait dependencies.[key]=[value] Integration.java",
"kamel run --trait deployer.[key]=[value] --trait deployer.[key2]=[value2] Integration.java",
"kamel run --trait deployment.[key]=[value] Integration.java",
"kamel run --trait environment.[key]=[value] --trait environment.[key2]=[value2] Integration.java",
"kamel run --trait error-handler.[key]=[value] --trait error-handler.[key2]=[value2] Integration.java",
"kamel run --trait jvm.[key]=[value] --trait jvm.[key2]=[value2] Integration.java",
"kamel run -t jvm.classpath=/path/to/my-dependency.jar:/path/to/another-dependency.jar",
"kamel run --trait kamelets.[key]=[value] --trait kamelets.[key2]=[value2] Integration.java",
"kamel run --trait affinity.[key]=[value] --trait affinity.[key2]=[value2] Integration.java",
"kamel run -t affinity.node-affinity-labels=\"kubernetes.io/hostname in(node-66-50.hosted.k8s.tld)\"",
"kamel run -t affinity.pod-anti-affinity-labels=\"camel.apache.org/integration\"",
"kamel run -t affinity.pod-affinity-labels=\"camel.apache.org/integration in(it1, it2)\"",
"kamel run -t affinity.pod-anti-affinity-labels=\"camel.apache.org/integration\" -t affinity.pod-anti-affinity-labels=\"camel.apache.org/component=operator\"",
"kamel run --trait openapi.[key]=[value] Integration.java",
"kamel run --trait owner.[key]=[value] --trait owner.[key2]=[value2] Integration.java",
"kamel run --trait platform.[key]=[value] --trait platform.[key2]=[value2] Integration.java",
"kamel run --trait quarkus.[key]=[value] --trait quarkus.[key2]=[value2] integration.java",
"kamel run -t quarkus.package-type=fast-jar -t quarkus.package-type=native"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/developing_and_managing_integrations_using_camel_k/camel-k-traits-reference |
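A quick way to see how the trait properties in this reference compose is to pass several of them on a single kamel run invocation. The following sketch is illustrative only: the resource values, JVM option, and node label are hypothetical placeholders, but each -t key used here (container.request-cpu, container.limit-memory, jvm.options, affinity.enabled, affinity.node-affinity-labels) is a property documented above.

kamel run Integration.java \
  -t container.request-cpu=500m \
  -t container.limit-memory=512Mi \
  -t jvm.options=-Xmx350m \
  -t affinity.enabled=true \
  -t affinity.node-affinity-labels="kubernetes.io/arch in(amd64)"

Because all traits share the enabled property, the affinity trait, which is disabled by default, is switched on explicitly before its node labels are set; platform traits such as container and jvm are already enabled and only need their properties.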
Chapter 10. Installation configuration parameters for IBM Cloud | Chapter 10. Installation configuration parameters for IBM Cloud Before you deploy an OpenShift Container Platform cluster on IBM Cloud(R), you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 10.1. Available installation configuration parameters for IBM Cloud The following tables specify the required, optional, and IBM Cloud-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 10.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 10.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 10.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 10.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. 
An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . If you are deploying the cluster to an existing Virtual Private Cloud (VPC), the CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Configures the IPv4 join subnet that is used internally by ovn-kubernetes . This subnet must not overlap with any other subnet that OpenShift Container Platform is using, including the node network. The size of the subnet must be larger than the number of nodes. You cannot change the value after installation. An IP network block in CIDR notation. The default value is 100.64.0.0/16 . 10.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 10.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. 
Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 10.1.4. Additional IBM Cloud configuration parameters Additional IBM Cloud(R) configuration parameters are described in the following table: Table 10.4. Additional IBM Cloud(R) parameters Parameter Description Values An IBM(R) Key Protect for IBM Cloud(R) (Key Protect) root key that should be used to encrypt the root (boot) volume of only control plane machines. The Cloud Resource Name (CRN) of the root key. The CRN must be enclosed in quotes (""). A Key Protect root key that should be used to encrypt the root (boot) volume of only compute machines. The CRN of the root key. The CRN must be enclosed in quotes (""). A Key Protect root key that should be used to encrypt the root (boot) volume of all of the cluster's machines. When specified as part of the default machine configuration, all managed storage classes are updated with this key. As such, data volumes that are provisioned after the installation are also encrypted using this key. The CRN of the root key. The CRN must be enclosed in quotes (""). The name of an existing resource group. By default, an installer-provisioned VPC and cluster resources are placed in this resource group. When not specified, the installation program creates the resource group for the cluster. If you are deploying the cluster into an existing VPC, the installer-provisioned cluster resources are placed in this resource group. When not specified, the installation program creates the resource group for the cluster. The VPC resources that you have provisioned must exist in a resource group that you specify using the networkResourceGroupName parameter. In either case, this resource group must only be used for a single cluster installation, as the cluster components assume ownership of all of the resources in the resource group. [ 1 ] String, for example existing_resource_group . A list of service endpoint names and URIs. By default, the installation program and cluster components use public service endpoints to access the required IBM Cloud(R) services. 
If network restrictions limit access to public service endpoints, you can specify an alternate service endpoint to override the default behavior. You can specify only one alternate service endpoint for each of the following services: Cloud Object Storage DNS Services Global Search Global Tagging Identity Services Key Protect Resource Controller Resource Manager VPC A valid service endpoint name and fully qualified URI. Valid names include: COS DNSServices GlobalServices GlobalTagging IAM KeyProtect ResourceController ResourceManager VPC The name of an existing resource group. This resource contains the existing VPC and subnets to which the cluster will be deployed. This parameter is required when deploying the cluster to a VPC that you have provisioned. String, for example existing_network_resource_group . The new dedicated host to create. If you specify a value for platform.ibmcloud.dedicatedHosts.name , this parameter is not required. Valid IBM Cloud(R) dedicated host profile, such as cx2-host-152x304 . [ 2 ] An existing dedicated host. If you specify a value for platform.ibmcloud.dedicatedHosts.profile , this parameter is not required. String, for example my-dedicated-host-name . The instance type for all IBM Cloud(R) machines. Valid IBM Cloud(R) instance type, such as bx2-8x32 . [ 2 ] The name of the existing VPC that you want to deploy your cluster to. String. The name(s) of the existing subnet(s) in your VPC that you want to deploy your control plane machines to. Specify a subnet for each availability zone. String array The name(s) of the existing subnet(s) in your VPC that you want to deploy your compute machines to. Specify a subnet for each availability zone. Subnet IDs are not supported. String array Whether you define an existing resource group, or if the installer creates one, determines how the resource group is treated when the cluster is uninstalled. If you define a resource group, the installer removes all of the installer-provisioned resources, but leaves the resource group alone; if a resource group is created as part of the installation, the installer removes all of the installer-provisioned resources and the resource group. To determine which profile best meets your needs, see Instance Profiles in the IBM(R) documentation. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"networking: ovnKubernetesConfig: ipv4: internalJoinSubnet:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"controlPlane: platform: ibmcloud: bootVolume: encryptionKey:",
"compute: platform: ibmcloud: bootVolume: encryptionKey:",
"platform: ibmcloud: defaultMachinePlatform: bootvolume: encryptionKey:",
"platform: ibmcloud: resourceGroupName:",
"platform: ibmcloud: serviceEndpoints: - name: url:",
"platform: ibmcloud: networkResourceGroupName:",
"platform: ibmcloud: dedicatedHosts: profile:",
"platform: ibmcloud: dedicatedHosts: name:",
"platform: ibmcloud: type:",
"platform: ibmcloud: vpcName:",
"platform: ibmcloud: controlPlaneSubnets:",
"platform: ibmcloud: computeSubnets:"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_cloud/installation-config-parameters-ibm-cloud-vpc |
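Taken together, the parameters described above map onto an install-config.yaml file. The following minimal sketch is an illustration only: every value is a placeholder, it includes only fields covered by the tables above, and a real deployment may require additional platform fields that are outside the scope of this reference.

apiVersion: v1
baseDomain: example.com
metadata:
  name: dev
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
platform:
  ibmcloud:
    resourceGroupName: existing_resource_group
    networkResourceGroupName: existing_network_resource_group
    vpcName: example-vpc
    controlPlaneSubnets:
    - example-control-plane-subnet
    computeSubnets:
    - example-compute-subnet
publish: External
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

As noted above, when you deploy into an existing VPC the networking.machineNetwork CIDR must contain the subnets listed under controlPlaneSubnets and computeSubnets, and the VPC resources must reside in the resource group named by networkResourceGroupName.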
Chapter 9. Clustering | Chapter 9. Clustering Support for IBM iPDU Fence Device Red Hat Enterprise Linux 6.4 adds support for the IBM iPDU fence device. For more information on the parameters of this fence device, refer to the Fence Device Parameters appendix in the Red Hat Enterprise Linux 6 Cluster Administration guide. Support for Eaton Network Power Controller Fence Device Red Hat Enterprise Linux 6.4 adds support for fence_eaton_snmp , the fence agent for the Eaton over SNMP network power switch. For more information on the parameters of this fence agent, refer to the Fence Device Parameters appendix in the Red Hat Enterprise Linux 6 Cluster Administration guide. New keepalived Package Red Hat Enterprise Linux 6.4 includes the keepalived package as a Technology Preview. The keepalived package provides simple and robust facilities for load-balancing and high-availability. The load-balancing framework relies on the well-known and widely used Linux Virtual Server kernel module providing Layer 4 network load-balancing. The keepalived daemon implements a set of health checkers for load-balanced server pools according to their state. The keepalived daemon also implements the Virtual Router Redundancy Protocol (VRRP), allowing router or director failover to achieve high availability. Watchdog Recovery New fence_sanlock and checkquorum.wdmd fence agents, included in Red Hat Enterprise Linux 6.4 as a Technology Preview, provide new mechanisms to trigger the recovery of a node via a watchdog device. Tutorials on how to enable this Technology Preview will be available at https://fedorahosted.org/cluster/wiki/HomePage . Support for VMDK-based Storage Red Hat Enterprise Linux 6.4 adds support for clusters utilizing VMware's VMDK (Virtual Machine Disk) disk image technology with the multi-writer option. This allows you, for example, to use VMDK-based storage with the multi-writer option for clustered file systems such as GFS2. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_release_notes/chap-clustering |
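For readers evaluating the keepalived Technology Preview described above, a minimal keepalived.conf sketch follows. It is illustrative only: the interface name, virtual router ID, addresses, and ports are hypothetical placeholders that must be adapted to your environment, and the configuration should be verified against the documentation shipped with the keepalived package. The sketch shows the two facilities mentioned above, a VRRP instance for director failover and an LVS virtual server for Layer 4 load-balancing.

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.122.200
    }
}

virtual_server 192.168.122.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    protocol TCP
    real_server 192.168.122.10 80 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

A backup director would carry the same configuration with state BACKUP and a lower priority, so that the virtual IP address fails over automatically when the master becomes unavailable.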
Chapter 8. Advanced managed cluster configuration with SiteConfig resources | Chapter 8. Advanced managed cluster configuration with SiteConfig resources You can use SiteConfig custom resources (CRs) to deploy custom functionality and configurations in your managed clusters at installation time. 8.1. Customizing extra installation manifests in the GitOps ZTP pipeline You can define a set of extra manifests for inclusion in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline. These manifests are linked to the SiteConfig custom resources (CRs) and are applied to the cluster during installation. Including MachineConfig CRs at install time makes the installation process more efficient. Prerequisites Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure Create a set of extra manifest CRs that the GitOps ZTP pipeline uses to customize the cluster installs. In your custom /siteconfig directory, create a subdirectory /custom-manifest for your extra manifests. The following example illustrates a sample /siteconfig with /custom-manifest folder: siteconfig ├── site1-sno-du.yaml ├── site2-standard-du.yaml ├── extra-manifest/ └── custom-manifest └── 01-example-machine-config.yaml Note The subdirectory names /custom-manifest and /extra-manifest used throughout are example names only. There is no requirement to use these names and no restriction on how you name these subdirectories. In this example /extra-manifest refers to the Git subdirectory that stores the contents of /extra-manifest from the ztp-site-generate container. Add your custom extra manifest CRs to the siteconfig/custom-manifest directory. In your SiteConfig CR, enter the directory name in the extraManifests.searchPaths field, for example: clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" extraManifests: searchPaths: - extra-manifest/ 1 - custom-manifest/ 2 1 Folder for manifests copied from the ztp-site-generate container. 2 Folder for custom manifests. Save the SiteConfig , /extra-manifest , and /custom-manifest CRs, and push them to the site configuration repo. During cluster provisioning, the GitOps ZTP pipeline appends the CRs in the /custom-manifest directory to the default set of extra manifests stored in extra-manifest/ . Note As of version 4.14 extraManifestPath is subject to a deprecation warning. While extraManifestPath is still supported, we recommend that you use extraManifests.searchPaths . If you define extraManifests.searchPaths in the SiteConfig file, the GitOps ZTP pipeline does not fetch manifests from the ztp-site-generate container during site installation. If you define both extraManifestPath and extraManifests.searchPaths in the Siteconfig CR, the setting defined for extraManifests.searchPaths takes precedence. It is strongly recommended that you extract the contents of /extra-manifest from the ztp-site-generate container and push it to the GIT repository. 8.2. Filtering custom resources using SiteConfig filters By using filters, you can easily customize SiteConfig custom resources (CRs) to include or exclude other CRs for use in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline. You can specify an inclusionDefault value of include or exclude for the SiteConfig CR, along with a list of the specific extraManifest RAN CRs that you want to include or exclude. 
Setting inclusionDefault to include makes the GitOps ZTP pipeline apply all the files in /source-crs/extra-manifest during installation. Setting inclusionDefault to exclude does the opposite. You can exclude individual CRs from the /source-crs/extra-manifest folder that are otherwise included by default. The following example configures a custom single-node OpenShift SiteConfig CR to exclude the /source-crs/extra-manifest/03-sctp-machine-config-worker.yaml CR at installation time. Some additional optional filtering scenarios are also described. Prerequisites You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure To prevent the GitOps ZTP pipeline from applying the 03-sctp-machine-config-worker.yaml CR file, apply the following YAML in the SiteConfig CR: apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "site1-sno-du" namespace: "site1-sno-du" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.16" sshPublicKey: "<ssh_public_key>" clusters: - clusterName: "site1-sno-du" extraManifests: filter: exclude: - 03-sctp-machine-config-worker.yaml The GitOps ZTP pipeline skips the 03-sctp-machine-config-worker.yaml CR during installation. All other CRs in /source-crs/extra-manifest are applied. Save the SiteConfig CR and push the changes to the site configuration repository. The GitOps ZTP pipeline monitors and adjusts what CRs it applies based on the SiteConfig filter instructions. Optional: To prevent the GitOps ZTP pipeline from applying all the /source-crs/extra-manifest CRs during cluster installation, apply the following YAML in the SiteConfig CR: - clusterName: "site1-sno-du" extraManifests: filter: inclusionDefault: exclude Optional: To exclude all the /source-crs/extra-manifest RAN CRs and instead include a custom CR file during installation, edit the custom SiteConfig CR to set the custom manifests folder and the include file, for example: clusters: - clusterName: "site1-sno-du" extraManifestPath: "<custom_manifest_folder>" 1 extraManifests: filter: inclusionDefault: exclude 2 include: - custom-sctp-machine-config-worker.yaml 1 Replace <custom_manifest_folder> with the name of the folder that contains the custom installation CRs, for example, user-custom-manifest/ . 2 Set inclusionDefault to exclude to prevent the GitOps ZTP pipeline from applying the files in /source-crs/extra-manifest during installation. The following example illustrates the custom folder structure: siteconfig ├── site1-sno-du.yaml └── user-custom-manifest └── custom-sctp-machine-config-worker.yaml 8.3. Deleting a node by using the SiteConfig CR By using a SiteConfig custom resource (CR), you can delete and reprovision a node. This method is more efficient than manually deleting the node. Prerequisites You have configured the hub cluster to generate the required installation and policy CRs. You have created a Git repository in which you can manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as the source repository for the Argo CD application. 
Procedure Update the SiteConfig CR to include the bmac.agent-install.openshift.io/remove-agent-and-node-on-delete=true annotation and push the changes to the Git repository: apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "cnfdf20" namespace: "cnfdf20" spec: clusters: nodes: - hostname: node6 role: "worker" crAnnotations: add: BareMetalHost: bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: true # ... Verify that the BareMetalHost object is annotated by running the following command: oc get bmh -n <managed-cluster-namespace> <bmh-object> -ojsonpath='{.metadata}' | jq -r '.annotations["bmac.agent-install.openshift.io/remove-agent-and-node-on-delete"]' Example output true Suppress the generation of the BareMetalHost CR by updating the SiteConfig CR to include the crSuppression.BareMetalHost annotation: apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "cnfdf20" namespace: "cnfdf20" spec: clusters: - nodes: - hostName: node6 role: "worker" crSuppression: - BareMetalHost # ... Push the changes to the Git repository and wait for deprovisioning to start. The status of the BareMetalHost CR should change to deprovisioning . Wait for the BareMetalHost to finish deprovisioning, and be fully deleted. Verification Verify that the BareMetalHost and Agent CRs for the worker node have been deleted from the hub cluster by running the following commands: USD oc get bmh -n <cluster-ns> USD oc get agent -n <cluster-ns> Verify that the node record has been deleted from the spoke cluster by running the following command: USD oc get nodes Note If you are working with secrets, deleting a secret too early can cause an issue because ArgoCD needs the secret to complete resynchronization after deletion. Delete the secret only after the node cleanup, when the current ArgoCD synchronization is complete. steps To reprovision a node, delete the changes previously added to the SiteConfig , push the changes to the Git repository, and wait for the synchronization to complete. This regenerates the BareMetalHost CR of the worker node and triggers the re-install of the node. | [
"siteconfig ├── site1-sno-du.yaml ├── site2-standard-du.yaml ├── extra-manifest/ └── custom-manifest └── 01-example-machine-config.yaml",
"clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" extraManifests: searchPaths: - extra-manifest/ 1 - custom-manifest/ 2",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"site1-sno-du\" namespace: \"site1-sno-du\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.16\" sshPublicKey: \"<ssh_public_key>\" clusters: - clusterName: \"site1-sno-du\" extraManifests: filter: exclude: - 03-sctp-machine-config-worker.yaml",
"- clusterName: \"site1-sno-du\" extraManifests: filter: inclusionDefault: exclude",
"clusters: - clusterName: \"site1-sno-du\" extraManifestPath: \"<custom_manifest_folder>\" 1 extraManifests: filter: inclusionDefault: exclude 2 include: - custom-sctp-machine-config-worker.yaml",
"siteconfig ├── site1-sno-du.yaml └── user-custom-manifest └── custom-sctp-machine-config-worker.yaml",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"cnfdf20\" namespace: \"cnfdf20\" spec: clusters: nodes: - hostname: node6 role: \"worker\" crAnnotations: add: BareMetalHost: bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: true",
"get bmh -n <managed-cluster-namespace> <bmh-object> -ojsonpath='{.metadata}' | jq -r '.annotations[\"bmac.agent-install.openshift.io/remove-agent-and-node-on-delete\"]'",
"true",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"cnfdf20\" namespace: \"cnfdf20\" spec: clusters: - nodes: - hostName: node6 role: \"worker\" crSuppression: - BareMetalHost",
"oc get bmh -n <cluster-ns>",
"oc get agent -n <cluster-ns>",
"oc get nodes"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/edge_computing/ztp-advanced-install-ztp |
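A minimal sketch of the kind of custom extra manifest that section 8.1 references as /custom-manifest/01-example-machine-config.yaml but does not show. The role, file path, and file contents below are illustrative assumptions only, not values defined by this chapter; substitute the MachineConfig settings that your site actually requires.

Example: hypothetical custom MachineConfig extra manifest

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 01-example-machine-config
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/example/custom.conf   # assumed path, for illustration only
          mode: 420                         # decimal for 0644
          overwrite: true
          contents:
            source: data:text/plain;charset=utf-8;base64,ZXhhbXBsZSBzZXR0aW5nCg==   # "example setting"

Because this file sits in the /custom-manifest directory listed under extraManifests.searchPaths, the GitOps ZTP pipeline appends it to the default set of extra manifests and applies it during cluster installation.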
Chapter 6. Connecting applications to services | Chapter 6. Connecting applications to services 6.1. Release notes for Service Binding Operator The Service Binding Operator consists of a controller and an accompanying custom resource definition (CRD) for service binding. It manages the data plane for workloads and backing services. The Service Binding Controller reads the data made available by the control plane of backing services. Then, it projects this data to workloads according to the rules specified through the ServiceBinding resource. With Service Binding Operator, you can: Bind your workloads together with Operator-managed backing services. Automate configuration of binding data. Provide service operators a low-touch administrative experience to provision and manage access to services. Enrich development lifecycle with a consistent and declarative service binding method that eliminates discrepancies in cluster environments. The custom resource definition (CRD) of the Service Binding Operator supports the following APIs: Service Binding with the binding.operators.coreos.com API group. Service Binding (Spec API) with the servicebinding.io API group. 6.1.1. Support matrix Some features in the following table are in Technology Preview . These experimental features are not intended for production use. In the table, features are marked with the following statuses: TP : Technology Preview GA : General Availability Note the following scope of support on the Red Hat Customer Portal for these features: Table 6.1. Support matrix Service Binding Operator API Group and Support Status OpenShift Versions Version binding.operators.coreos.com servicebinding.io 1.3.3 GA GA 4.9-4.12 1.3.1 GA GA 4.9-4.11 1.3 GA GA 4.9-4.11 1.2 GA GA 4.7-4.11 1.1.1 GA TP 4.7-4.10 1.1 GA TP 4.7-4.10 1.0.1 GA TP 4.7-4.9 1.0 GA TP 4.7-4.9 6.1.2. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see Red Hat CTO Chris Wright's message . 6.1.3. Release notes for Service Binding Operator 1.3.3 Service Binding Operator 1.3.3 is now available on OpenShift Container Platform 4.9, 4.10, 4.11 and 4.12. 6.1.3.1. Fixed issues Before this update, a security vulnerability CVE-2022-41717 was noted for Service Binding Operator. This update fixes the CVE-2022-41717 error and updates the golang.org/x/net package from v0.0.0-20220906165146-f3363e06e74c to v0.4.0. APPSVC-1256 Before this update, Provisioned Services were only detected if the respective resource had the "servicebinding.io/provisioned-service: true" annotation set while other Provisioned Services were missed. With this update, the detection mechanism identifies all Provisioned Services correctly based on the "status.binding.name" attribute. APPSVC-1204 6.1.4. Release notes for Service Binding Operator 1.3.1 Service Binding Operator 1.3.1 is now available on OpenShift Container Platform 4.9, 4.10, and 4.11. 6.1.4.1. Fixed issues Before this update, a security vulnerability CVE-2022-32149 was noted for Service Binding Operator. This update fixes the CVE-2022-32149 error and updates the golang.org/x/text package from v0.3.7 to v0.3.8. APPSVC-1220 6.1.5. 
Release notes for Service Binding Operator 1.3 Service Binding Operator 1.3 is now available on OpenShift Container Platform 4.9, 4.10, and 4.11. 6.1.5.1. Removed functionality In Service Binding Operator 1.3, the Operator Lifecycle Manager (OLM) descriptor feature has been removed to improve resource utilization. As an alternative to OLM descriptors, you can use CRD annotations to declare binding data. 6.1.6. Release notes for Service Binding Operator 1.2 Service Binding Operator 1.2 is now available on OpenShift Container Platform 4.7, 4.8, 4.9, 4.10, and 4.11. 6.1.6.1. New features This section highlights what is new in Service Binding Operator 1.2: Enable Service Binding Operator to consider optional fields in the annotations by setting the optional flag value to true . Support for servicebinding.io/v1beta1 resources. Improvements to the discoverability of bindable services by exposing the relevant binding secret without requiring a workload to be present. 6.1.6.2. Known issues Currently, when you install Service Binding Operator on OpenShift Container Platform 4.11, the memory footprint of Service Binding Operator increases beyond expected limits. With low usage, however, the memory footprint stays within the expected ranges of your environment or scenarios. In comparison with OpenShift Container Platform 4.10, under stress, both the average and maximum memory footprint increase considerably. This issue is evident in the versions of Service Binding Operator as well. There is currently no workaround for this issue. APPSVC-1200 By default, the projected files get their permissions set to 0644. Service Binding Operator cannot set specific permissions due to a bug in Kubernetes that causes issues if the service expects specific permissions such as, 0600 . As a workaround, you can modify the code of the program or the application that is running inside a workload resource to copy the file to the /tmp directory and set the appropriate permissions. APPSVC-1127 There is currently a known issue with installing Service Binding Operator in a single namespace installation mode. The absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. When this happens, it generates an error message similar to the following example: Example error message `postgresclusters.postgres-operator.crunchydata.com "hippo" is forbidden: User "system:serviceaccount:my-petclinic:service-binding-operator" cannot get resource "postgresclusters" in API group "postgres-operator.crunchydata.com" in the namespace "my-petclinic"` Workaround 1: Install the Service Binding Operator in the all namespaces installation mode. As a result, the appropriate cluster-scoped RBAC rule now exists and the binding succeeds. 
Workaround 2: If you cannot install the Service Binding Operator in the all namespaces installation mode, install the following role binding into the namespace where the Service Binding Operator is installed: Example: Role binding for Crunchy Postgres Operator kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role APPSVC-1062 According to the specification, when you change the ClusterWorkloadResourceMapping resources, Service Binding Operator must use the previous version of the ClusterWorkloadResourceMapping resource to remove the binding data that was being projected until now. Currently, when you change the ClusterWorkloadResourceMapping resources, the Service Binding Operator uses the latest version of the ClusterWorkloadResourceMapping resource to remove the binding data. As a result, the Service Binding Operator might remove the binding data incorrectly. As a workaround, perform the following steps: Delete any ServiceBinding resources that use the corresponding ClusterWorkloadResourceMapping resource. Modify the ClusterWorkloadResourceMapping resource. Re-apply the ServiceBinding resources that you previously removed in step 1. APPSVC-1102 6.1.7. Release notes for Service Binding Operator 1.1.1 Service Binding Operator 1.1.1 is now available on OpenShift Container Platform 4.7, 4.8, 4.9, and 4.10. 6.1.7.1. Fixed issues Before this update, a security vulnerability CVE-2021-38561 was noted for Service Binding Operator Helm chart. This update fixes the CVE-2021-38561 error and updates the golang.org/x/text package from v0.3.6 to v0.3.7. APPSVC-1124 Before this update, users of the Developer Sandbox did not have sufficient permissions to read ClusterWorkloadResourceMapping resources. As a result, Service Binding Operator prevented all service bindings from being successful. With this update, the Service Binding Operator now includes the appropriate role-based access control (RBAC) rules for any authenticated subject including the Developer Sandbox users. These RBAC rules allow the Service Binding Operator to get , list , and watch the ClusterWorkloadResourceMapping resources for the Developer Sandbox users and to process service bindings successfully. APPSVC-1135 6.1.7.2. Known issues There is currently a known issue with installing Service Binding Operator in a single namespace installation mode. The absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. When this happens, it generates an error message similar to the following example: Example error message `postgresclusters.postgres-operator.crunchydata.com "hippo" is forbidden: User "system:serviceaccount:my-petclinic:service-binding-operator" cannot get resource "postgresclusters" in API group "postgres-operator.crunchydata.com" in the namespace "my-petclinic"` Workaround 1: Install the Service Binding Operator in the all namespaces installation mode. As a result, the appropriate cluster-scoped RBAC rule now exists and the binding succeeds. 
Workaround 2: If you cannot install the Service Binding Operator in the all namespaces installation mode, install the following role binding into the namespace where the Service Binding Operator is installed: Example: Role binding for Crunchy Postgres Operator kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role APPSVC-1062 Currently, when you modify the ClusterWorkloadResourceMapping resources, the Service Binding Operator does not implement correct behavior. As a workaround, perform the following steps: Delete any ServiceBinding resources that use the corresponding ClusterWorkloadResourceMapping resource. Modify the ClusterWorkloadResourceMapping resource. Re-apply the ServiceBinding resources that you previously removed in step 1. APPSVC-1102 6.1.8. Release notes for Service Binding Operator 1.1 Service Binding Operator is now available on OpenShift Container Platform 4.7, 4.8, 4.9, and 4.10. 6.1.8.1. New features This section highlights what is new in Service Binding Operator 1.1: Service Binding Options Workload resource mapping: Define exactly where binding data needs to be projected for the secondary workloads. Bind new workloads using a label selector. 6.1.8.2. Fixed issues Before this update, service bindings that used label selectors to pick up workloads did not project service binding data into the new workloads that matched the given label selectors. As a result, the Service Binding Operator could not periodically bind such new workloads. With this update, service bindings now project service binding data into the new workloads that match the given label selector. The Service Binding Operator now periodically attempts to find and bind such new workloads. APPSVC-1083 6.1.8.3. Known issues There is currently a known issue with installing Service Binding Operator in a single namespace installation mode. The absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. When this happens, it generates an error message similar to the following example: Example error message `postgresclusters.postgres-operator.crunchydata.com "hippo" is forbidden: User "system:serviceaccount:my-petclinic:service-binding-operator" cannot get resource "postgresclusters" in API group "postgres-operator.crunchydata.com" in the namespace "my-petclinic"` Workaround 1: Install the Service Binding Operator in the all namespaces installation mode. As a result, the appropriate cluster-scoped RBAC rule now exists and the binding succeeds. 
Workaround 2: If you cannot install the Service Binding Operator in the all namespaces installation mode, install the following role binding into the namespace where the Service Binding Operator is installed: Example: Role binding for Crunchy Postgres Operator kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role APPSVC-1062 Currently, when you modify the ClusterWorkloadResourceMapping resources, the Service Binding Operator does not implement correct behavior. As a workaround, perform the following steps: Delete any ServiceBinding resources that use the corresponding ClusterWorkloadResourceMapping resource. Modify the ClusterWorkloadResourceMapping resource. Re-apply the ServiceBinding resources that you previously removed in step 1. APPSVC-1102 6.1.9. Release notes for Service Binding Operator 1.0.1 Service Binding Operator is now available on OpenShift Container Platform 4.7, 4.8 and 4.9. Service Binding Operator 1.0.1 supports OpenShift Container Platform 4.9 and later running on: IBM Power Systems IBM Z and LinuxONE The custom resource definition (CRD) of the Service Binding Operator 1.0.1 supports the following APIs: Service Binding with the binding.operators.coreos.com API group. Service Binding (Spec API Tech Preview) with the servicebinding.io API group. Important Service Binding (Spec API Tech Preview) with the servicebinding.io API group is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.1.9.1. Support matrix Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Technology Preview Features Support Scope In the table below, features are marked with the following statuses: TP : Technology Preview GA : General Availability Note the following scope of support on the Red Hat Customer Portal for these features: Table 6.2. Support matrix Feature Service Binding Operator 1.0.1 binding.operators.coreos.com API group GA servicebinding.io API group TP 6.1.9.2. Fixed issues Before this update, binding the data values from a Cluster custom resource (CR) of the postgresql.k8s.enterpriesedb.io/v1 API collected the host binding value from the .metadata.name field of the CR. The collected binding value is an incorrect hostname and the correct hostname is available at the .status.writeService field. With this update, the annotations that the Service Binding Operator uses to expose the binding data values from the backing service CR are now modified to collect the host binding value from the .status.writeService field. The Service Binding Operator uses these modified annotations to project the correct hostname in the host and provider bindings. 
APPSVC-1040 Before this update, when you would bind a PostgresCluster CR of the postgres-operator.crunchydata.com/v1beta1 API, the binding data values did not include the values for the database certificates. As a result, the application failed to connect to the database. With this update, modifications to the annotations that the Service Binding Operator uses to expose the binding data from the backing service CR now include the database certificates. The Service Binding Operator uses these modified annotations to project the correct ca.crt , tls.crt , and tls.key certificate files. APPSVC-1045 Before this update, when you would bind a PerconaXtraDBCluster custom resource (CR) of the pxc.percona.com API, the binding data values did not include the port and database values. These binding values along with the others already projected are necessary for an application to successfully connect to the database service. With this update, the annotations that the Service Binding Operator uses to expose the binding data values from the backing service CR are now modified to project the additional port and database binding values. The Service Binding Operator uses these modified annotations to project the complete set of binding values that the application can use to successfully connect to the database service. APPSVC-1073 6.1.9.3. Known issues Currently, when you install the Service Binding Operator in the single namespace installation mode, the absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. In addition, the following error message is generated: Example error message Workaround 1: Install the Service Binding Operator in the all namespaces installation mode. As a result, the appropriate cluster-scoped RBAC rule now exists and the binding succeeds. Workaround 2: If you cannot install the Service Binding Operator in the all namespaces installation mode, install the following role binding into the namespace where the Service Binding Operator is installed: Example: Role binding for Crunchy Postgres Operator kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role APPSVC-1062 6.1.10. Release notes for Service Binding Operator 1.0 Service Binding Operator is now available on OpenShift Container Platform 4.7, 4.8 and 4.9. The custom resource definition (CRD) of the Service Binding Operator 1.0 supports the following APIs: Service Binding with the binding.operators.coreos.com API group. Service Binding (Spec API Tech Preview) with the servicebinding.io API group. Important Service Binding (Spec API Tech Preview) with the servicebinding.io API group is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.1.10.1. 
Support matrix Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Technology Preview Features Support Scope In the table below, features are marked with the following statuses: TP : Technology Preview GA : General Availability Note the following scope of support on the Red Hat Customer Portal for these features: Table 6.3. Support matrix Feature Service Binding Operator 1.0 binding.operators.coreos.com API group GA servicebinding.io API group TP 6.1.10.2. New features Service Binding Operator 1.0 supports OpenShift Container Platform 4.9 and later running on: IBM Power Systems IBM Z and LinuxONE This section highlights what is new in Service Binding Operator 1.0: Exposal of binding data from services Based on annotations present in CRD, custom resources (CRs), or resources. Based on descriptors present in Operator Lifecycle Manager (OLM) descriptors. Support for provisioned services Workload projection Projection of binding data as files, with volume mounts. Projection of binding data as environment variables. Service Binding Options Bind backing services in a namespace that is different from the workload namespace. Project binding data into the specific container workloads. Auto-detection of the binding data from resources owned by the backing service CR. Compose custom binding data from the exposed binding data. Support for non- PodSpec compliant workload resources. Security Support for role-based access control (RBAC). 6.1.11. Additional resources Understanding Service Binding Operator . 6.2. Understanding Service Binding Operator Application developers need access to backing services to build and connect workloads. Connecting workloads to backing services is always a challenge because each service provider suggests a different way to access their secrets and consume them in a workload. In addition, manual configuration and maintenance of this binding together of workloads and backing services make the process tedious, inefficient, and error-prone. The Service Binding Operator enables application developers to easily bind workloads together with Operator-managed backing services, without any manual procedures to configure the binding connection. 6.2.1. Service Binding terminology This section summarizes the basic terms used in Service Binding. Service binding The representation of the action of providing information about a service to a workload. Examples include establishing the exchange of credentials between a Java application and a database that it requires. Backing service Any service or software that the application consumes over the network as part of its normal operation. Examples include a database, a message broker, an application with REST endpoints, an event stream, an Application Performance Monitor (APM), or a Hardware Security Module (HSM). Workload (application) Any process running within a container. Examples include a Spring Boot application, a NodeJS Express application, or a Ruby on Rails application. Binding data Information about a service that you use to configure the behavior of other resources within the cluster. Examples include credentials, connection details, volume mounts, or secrets. Binding connection Any connection that establishes an interaction between the connected components, such as a bindable backing service and an application requiring that backing service. 6.2.2. 
About Service Binding Operator The Service Binding Operator consists of a controller and an accompanying custom resource definition (CRD) for service binding. It manages the data plane for workloads and backing services. The Service Binding Controller reads the data made available by the control plane of backing services. Then, it projects this data to workloads according to the rules specified through the ServiceBinding resource. As a result, the Service Binding Operator enables workloads to use backing services or external services by automatically collecting and sharing binding data with the workloads. The process involves making the backing service bindable and binding the workload and the service together. 6.2.2.1. Making an Operator-managed backing service bindable To make a service bindable, as an Operator provider, you need to expose the binding data required by workloads to bind with the services provided by the Operator. You can provide the binding data either as annotations or as descriptors in the CRD of the Operator that manages the backing service. 6.2.2.2. Binding a workload together with a backing service By using the Service Binding Operator, as an application developer, you need to declare the intent of establishing a binding connection. You must create a ServiceBinding CR that references the backing service. This action triggers the Service Binding Operator to project the exposed binding data into the workload. The Service Binding Operator receives the declared intent and binds the workload together with the backing service. The CRD of the Service Binding Operator supports the following APIs: Service Binding with the binding.operators.coreos.com API group. Service Binding (Spec API) with the servicebinding.io API group. With Service Binding Operator, you can: Bind your workloads to Operator-managed backing services. Automate configuration of binding data. Provide service operators with a low-touch administrative experience to provision and manage access to services. Enrich the development lifecycle with a consistent and declarative service binding method that eliminates discrepancies in cluster environments. 6.2.3. Key features Exposal of binding data from services Based on annotations present in CRD, custom resources (CRs), or resources. Workload projection Projection of binding data as files, with volume mounts. Projection of binding data as environment variables. Service Binding Options Bind backing services in a namespace that is different from the workload namespace. Project binding data into the specific container workloads. Auto-detection of the binding data from resources owned by the backing service CR. Compose custom binding data from the exposed binding data. Support for non- PodSpec compliant workload resources. Security Support for role-based access control (RBAC). 6.2.4. API differences The CRD of the Service Binding Operator supports the following APIs: Service Binding with the binding.operators.coreos.com API group. Service Binding (Spec API) with the servicebinding.io API group. Both of these API groups have similar features, but they are not completely identical. 
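For orientation before the detailed comparison that follows, the sketch below declares the same binding once per API group. It is a minimal illustration only: it reuses the hippo PostgresCluster and spring-petclinic examples from this chapter, and the fields mirror the specification examples shown later in "Exposing binding data from a service".

Example: one binding expressed in both API groups

apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: spring-petclinic-pgcluster
spec:
  services:
    - group: postgres-operator.crunchydata.com
      version: v1beta1
      kind: PostgresCluster
      name: hippo
  application:
    name: spring-petclinic
    group: apps
    version: v1
    resource: deployments
---
apiVersion: servicebinding.io/v1beta1
kind: ServiceBinding
metadata:
  name: spring-petclinic-pgcluster
spec:
  service:
    apiVersion: postgres-operator.crunchydata.com/v1beta1
    kind: PostgresCluster
    name: hippo
  workload:
    apiVersion: apps/v1
    kind: Deployment
    name: spring-petclinic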
Here is the complete list of differences between these API groups: Feature Supported by the binding.operators.coreos.com API group Supported by the servicebinding.io API group Notes Binding to provisioned services Yes Yes Not applicable (N/A) Direct secret projection Yes Yes Not applicable (N/A) Bind as files Yes Yes Default behavior for the service bindings of the servicebinding.io API group Opt-in functionality for the service bindings of the binding.operators.coreos.com API group Bind as environment variables Yes Yes Default behavior for the service bindings of the binding.operators.coreos.com API group. Opt-in functionality for the service bindings of the servicebinding.io API group: Environment variables are created alongside files. Selecting workload with a label selector Yes Yes Not applicable (N/A) Detecting binding resources ( .spec.detectBindingResources ) Yes No The servicebinding.io API group has no equivalent feature. Naming strategies Yes No There is no current mechanism within the servicebinding.io API group to interpret the templates that naming strategies use. Container path Yes Partial Because a service binding of the binding.operators.coreos.com API group can specify mapping behavior within the ServiceBinding resource, the servicebinding.io API group cannot fully support an equivalent behavior without more information about the workload. Container name filtering No Yes The binding.operators.coreos.com API group has no equivalent feature. Secret path Yes No The servicebinding.io API group has no equivalent feature. Alternative binding sources (for example, binding data from annotations) Yes Allowed by Service Binding Operator The specification requires support for getting binding data from provisioned services and secrets. However, a strict reading of the specification suggests that support for other binding data sources is allowed. Using this fact, Service Binding Operator can pull the binding data from various sources (for example, pulling binding data from annotations). Service Binding Operator supports these sources on both the API groups. 6.2.5. Additional resources Getting started with service binding . 6.3. Installing Service Binding Operator This guide walks cluster administrators through the process of installing the Service Binding Operator to an OpenShift Container Platform cluster. You can install Service Binding Operator on OpenShift Container Platform 4.7 and later. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Your cluster has the Marketplace capability enabled or the Red Hat Operator catalog source configured manually. 6.3.1. Installing the Service Binding Operator using the web console You can install Service Binding Operator using the OpenShift Container Platform OperatorHub. When you install the Service Binding Operator, the custom resources (CRs) required for the service binding configuration are automatically installed along with the Operator. Procedure In the Administrator perspective of the web console, navigate to Operators OperatorHub . Use the Filter by keyword box to search for Service Binding Operator in the catalog. Click the Service Binding Operator tile. Read the brief description about the Operator on the Service Binding Operator page. Click Install . On the Install Operator page: Select All namespaces on the cluster (default) for the Installation Mode . 
This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be made available to all namespaces in the cluster. Select Automatic for the Approval Strategy . This ensures that the future upgrades to the Operator are handled automatically by the Operator Lifecycle Manager (OLM). If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version. Select an Update Channel . By default, the stable channel enables installation of the latest stable and supported release of the Service Binding Operator. Click Install . Note The Operator is installed automatically into the openshift-operators namespace. On the Installed Operator - ready for use pane, click View Operator . You will see the Operator listed on the Installed Operators page. Verify that the Status is set to Succeeded to confirm successful installation of Service Binding Operator. 6.3.2. Additional Resources Getting started with service binding . 6.4. Getting started with service binding The Service Binding Operator manages the data plane for workloads and backing services. This guide provides instructions with examples to help you create a database instance, deploy an application, and use the Service Binding Operator to create a binding connection between the application and the database service. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the oc CLI. You have installed Service Binding Operator from OperatorHub. You have installed the 5.1.2 version of the Crunchy Postgres for Kubernetes Operator from OperatorHub using the v5 Update channel. The installed Operator is available in an appropriate namespace, such as the my-petclinic namespace. Note You can create the namespace using the oc create namespace my-petclinic command. 6.4.1. Creating a PostgreSQL database instance To create a PostgreSQL database instance, you must create a PostgresCluster custom resource (CR) and configure the database. Procedure Create the PostgresCluster CR in the my-petclinic namespace by running the following command in shell: USD oc apply -n my-petclinic -f - << EOD --- apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo spec: image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.4-0 postgresVersion: 14 instances: - name: instance1 dataVolumeClaimSpec: accessModes: - "ReadWriteOnce" resources: requests: storage: 1Gi backups: pgbackrest: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0 repos: - name: repo1 volume: volumeClaimSpec: accessModes: - "ReadWriteOnce" resources: requests: storage: 1Gi EOD The annotations added in this PostgresCluster CR enable the service binding connection and trigger the Operator reconciliation. 
The output verifies that the database instance is created: Example output postgrescluster.postgres-operator.crunchydata.com/hippo created After you have created the database instance, ensure that all the pods in the my-petclinic namespace are running: USD oc get pods -n my-petclinic The output, which takes a few minutes to display, verifies that the database is created and configured: Example output NAME READY STATUS RESTARTS AGE hippo-backup-9rxm-88rzq 0/1 Completed 0 2m2s hippo-instance1-6psd-0 4/4 Running 0 3m28s hippo-repo-host-0 2/2 Running 0 3m28s After the database is configured, you can deploy the sample application and connect it to the database service. 6.4.2. Deploying the Spring PetClinic sample application To deploy the Spring PetClinic sample application on an OpenShift Container Platform cluster, you must use a deployment configuration and configure your local environment to be able to test the application. Procedure Deploy the spring-petclinic application with the PostgresCluster custom resource (CR) by running the following command in shell: USD oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD The output verifies that the Spring PetClinic sample application is created and deployed: Example output deployment.apps/spring-petclinic created service/spring-petclinic created Note If you are deploying the application using Container images in the Developer perspective of the web console, you must enter the following environment variables under the Deployment section of the Advanced options : Name: SPRING_PROFILES_ACTIVE Value: postgres Verify that the application is not yet connected to the database service by running the following command: USD oc get pods -n my-petclinic The output takes a few minutes to display the CrashLoopBackOff status: Example output NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s At this stage, the pod fails to start. If you try to interact with the application, it returns errors. Expose the service to create a route for your application: USD oc expose service spring-petclinic -n my-petclinic The output verifies that the spring-petclinic service is exposed and a route for the Spring PetClinic sample application is created: Example output route.route.openshift.io/spring-petclinic exposed You can now use the Service Binding Operator to connect the application to the database service. 6.4.3. Connecting the Spring PetClinic sample application to the PostgreSQL database service To connect the sample application to the database service, you must create a ServiceBinding custom resource (CR) that triggers the Service Binding Operator to project the binding data into the application. 
Procedure Create a ServiceBinding CR to project the binding data: USD oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster 2 name: hippo application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD 1 Specifies a list of service resources. 2 The CR of the database. 3 The sample application that points to a Deployment or any other similar resource with an embedded PodSpec. The output verifies that the ServiceBinding CR is created to project the binding data into the sample application. Example output servicebinding.binding.operators.coreos.com/spring-petclinic created Verify that the request for service binding is successful: USD oc get servicebindings -n my-petclinic Example output NAME READY REASON AGE spring-petclinic-pgcluster True ApplicationsBound 7s By default, the values from the binding data of the database service are projected as files into the workload container that runs the sample application. For example, all the values from the Secret resource are projected into the bindings/spring-petclinic-pgcluster directory. Note Optionally, you can also verify that the files in the application contain the projected binding data, by printing out the directory contents: USD for i in username password host port type; do oc exec -it deploy/spring-petclinic -n my-petclinic -- /bin/bash -c 'cd /tmp; find /bindings/*/'USDi' -exec echo -n {}:" " \; -exec cat {} \;'; echo; done Example output: With all the values from the secret resource /bindings/spring-petclinic-pgcluster/username: <username> /bindings/spring-petclinic-pgcluster/password: <password> /bindings/spring-petclinic-pgcluster/host: hippo-primary.my-petclinic.svc /bindings/spring-petclinic-pgcluster/port: 5432 /bindings/spring-petclinic-pgcluster/type: postgresql Set up the port forwarding from the application port to access the sample application from your local environment: USD oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic Example output Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080 Access http://localhost:8080/petclinic . You can now remotely access the Spring PetClinic sample application at localhost:8080 and see that the application is now connected to the database service. 6.4.4. Additional Resources Installing Service Binding Operator . Creating applications using the Developer perspective . Managing resources from custom resource definitions . Known bindable Operators . 6.5. Getting started with service binding on IBM Power Systems, IBM Z, and LinuxONE The Service Binding Operator manages the data plane for workloads and backing services. This guide provides instructions with examples to help you create a database instance, deploy an application, and use the Service Binding Operator to create a binding connection between the application and the database service. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the oc CLI. You have installed the Service Binding Operator from OperatorHub. 6.5.1. 
Deploying a PostgreSQL Operator Procedure To deploy the Dev4Devs PostgreSQL Operator in the my-petclinic namespace run the following command in shell: USD oc apply -f - << EOD --- apiVersion: v1 kind: Namespace metadata: name: my-petclinic --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: postgres-operator-group namespace: my-petclinic --- apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: ibm-multiarch-catalog namespace: openshift-marketplace spec: sourceType: grpc image: quay.io/ibm/operator-registry-<architecture> 1 imagePullPolicy: IfNotPresent displayName: ibm-multiarch-catalog updateStrategy: registryPoll: interval: 30m --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: postgresql-operator-dev4devs-com namespace: openshift-operators spec: channel: alpha installPlanApproval: Automatic name: postgresql-operator-dev4devs-com source: ibm-multiarch-catalog sourceNamespace: openshift-marketplace --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: database-view labels: servicebinding.io/controller: "true" rules: - apiGroups: - postgresql.dev4devs.com resources: - databases verbs: - get - list EOD 1 The Operator image. For IBM Power: quay.io/ibm/operator-registry-ppc64le:release-4.9 For IBM Z and LinuxONE: quay.io/ibm/operator-registry-s390x:release-4.8 Verification After the operator is installed, list the operator subscriptions in the openshift-operators namespace: USD oc get subs -n openshift-operators Example output NAME PACKAGE SOURCE CHANNEL postgresql-operator-dev4devs-com postgresql-operator-dev4devs-com ibm-multiarch-catalog alpha rh-service-binding-operator rh-service-binding-operator redhat-operators stable 6.5.2. Creating a PostgreSQL database instance To create a PostgreSQL database instance, you must create a Database custom resource (CR) and configure the database. Procedure Create the Database CR in the my-petclinic namespace by running the following command in shell: USD oc apply -f - << EOD apiVersion: postgresql.dev4devs.com/v1alpha1 kind: Database metadata: name: sampledatabase namespace: my-petclinic annotations: host: sampledatabase type: postgresql port: "5432" service.binding/database: 'path={.spec.databaseName}' service.binding/port: 'path={.metadata.annotations.port}' service.binding/password: 'path={.spec.databasePassword}' service.binding/username: 'path={.spec.databaseUser}' service.binding/type: 'path={.metadata.annotations.type}' service.binding/host: 'path={.metadata.annotations.host}' spec: databaseCpu: 30m databaseCpuLimit: 60m databaseMemoryLimit: 512Mi databaseMemoryRequest: 128Mi databaseName: "sampledb" databaseNameKeyEnvVar: POSTGRESQL_DATABASE databasePassword: "samplepwd" databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD databaseStorageRequest: 1Gi databaseUser: "sampleuser" databaseUserKeyEnvVar: POSTGRESQL_USER image: registry.redhat.io/rhel8/postgresql-13:latest databaseStorageClassName: nfs-storage-provisioner size: 1 EOD The annotations added in this Database CR enable the service binding connection and trigger the Operator reconciliation. 
The output verifies that the database instance is created: Example output database.postgresql.dev4devs.com/sampledatabase created After you have created the database instance, ensure that all the pods in the my-petclinic namespace are running: USD oc get pods -n my-petclinic The output, which takes a few minutes to display, verifies that the database is created and configured: Example output NAME READY STATUS RESTARTS AGE sampledatabase-cbc655488-74kss 0/1 Running 0 32s After the database is configured, you can deploy the sample application and connect it to the database service. 6.5.3. Deploying the Spring PetClinic sample application To deploy the Spring PetClinic sample application on an OpenShift Container Platform cluster, you must use a deployment configuration and configure your local environment to be able to test the application. Procedure Deploy the spring-petclinic application with the PostgresCluster custom resource (CR) by running the following command in shell: USD oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres - name: org.springframework.cloud.bindings.boot.enable value: "true" ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD The output verifies that the Spring PetClinic sample application is created and deployed: Example output deployment.apps/spring-petclinic created service/spring-petclinic created Note If you are deploying the application using Container images in the Developer perspective of the web console, you must enter the following environment variables under the Deployment section of the Advanced options : Name: SPRING_PROFILES_ACTIVE Value: postgres Verify that the application is not yet connected to the database service by running the following command: USD oc get pods -n my-petclinic It takes take a few minutes until the CrashLoopBackOff status is displayed: Example output NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s At this stage, the pod fails to start. If you try to interact with the application, it returns errors. You can now use the Service Binding Operator to connect the application to the database service. 6.5.4. Connecting the Spring PetClinic sample application to the PostgreSQL database service To connect the sample application to the database service, you must create a ServiceBinding custom resource (CR) that triggers the Service Binding Operator to project the binding data into the application. Procedure Create a ServiceBinding CR to project the binding data: USD oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgresql.dev4devs.com kind: Database 2 name: sampledatabase version: v1alpha1 application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD 1 Specifies a list of service resources. 2 The CR of the database. 
3 The sample application that points to a Deployment or any other similar resource with an embedded PodSpec. The output verifies that the ServiceBinding CR is created to project the binding data into the sample application. Example output servicebinding.binding.operators.coreos.com/spring-petclinic created Verify that the request for service binding is successful: USD oc get servicebindings -n my-petclinic Example output NAME READY REASON AGE spring-petclinic-postgresql True ApplicationsBound 47m By default, the values from the binding data of the database service are projected as files into the workload container that runs the sample application. For example, all the values from the Secret resource are projected into the bindings/spring-petclinic-pgcluster directory. Once this is created, you can go to the topology to see the visual connection. Figure 6.1. Connecting spring-petclinic to a sample database Set up the port forwarding from the application port to access the sample application from your local environment: USD oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic Example output Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080 Access http://localhost:8080 . You can now remotely access the Spring PetClinic sample application at localhost:8080 and see that the application is now connected to the database service. 6.5.5. Additional resources Installing Service Binding Operator Creating applications using the Developer perspective Managing resources from custom resource definitions 6.6. Exposing binding data from a service Application developers need access to backing services to build and connect workloads. Connecting workloads to backing services is always a challenge because each service provider requires a different way to access their secrets and consume them in a workload. The Service Binding Operator enables application developers to easily bind workloads together with operator-managed backing services, without any manual procedures to configure the binding connection. For the Service Binding Operator to provide the binding data, as an Operator provider or user who creates backing services, you must expose the binding data to be automatically detected by the Service Binding Operator. Then, the Service Binding Operator automatically collects the binding data from the backing service and shares it with a workload to provide a consistent and predictable experience. 6.6.1. Methods of exposing binding data This section describes the methods you can use to expose the binding data. Ensure that you know and understand your workload requirements and environment, and how it works with the provided services. Binding data is exposed under the following circumstances: Backing service is available as a provisioned service resource. The service you intend to connect to is compliant with the Service Binding specification. You must create a Secret resource with all the required binding data values and reference it in the backing service custom resource (CR). The detection of all the binding data values is automatic. Backing service is not available as a provisioned service resource. You must expose the binding data from the backing service. Depending on your workload requirements and environment, you can choose any of the following methods to expose the binding data: Direct secret reference Declaring binding data through custom resource definition (CRD) or CR annotations Detection of binding data through owned resources 6.6.1.1. 
Provisioned service Provisioned service represents a backing service CR with a reference to a Secret resource placed in the .status.binding.name field of the backing service CR. As an Operator provider or the user who creates backing services, you can use this method to be compliant with the Service Binding specification, by creating a Secret resource and referencing it in the .status.binding.name section of the backing service CR. This Secret resource must provide all the binding data values required for a workload to connect to the backing service. The following examples show an AccountService CR that represents a backing service and a Secret resource referenced from the CR. Example: AccountService CR apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service spec: ... status: binding: name: hippo-pguser-hippo Example: Referenced Secret resource apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: "<password>" user: "<username>" ... When creating a service binding resource, you can directly give the details of the AccountService resource in the ServiceBinding specification as follows: Example: ServiceBinding resource apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: ... services: - group: "example.com" version: v1alpha1 kind: AccountService name: prod-account-service application: name: spring-petclinic group: apps version: v1 resource: deployments Example: ServiceBinding resource in Specification API apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: ... service: apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service workload: apiVersion: apps/v1 kind: Deployment name: spring-petclinic This method exposes all the keys in the hippo-pguser-hippo referenced Secret resource as binding data that is to be projected into the workload. 6.6.1.2. Direct secret reference You can use this method, if all the required binding data values are available in a Secret resource that you can reference in your Service Binding definition. In this method, a ServiceBinding resource directly references a Secret resource to connect to a service. All the keys in the Secret resource are exposed as binding data. Example: Specification with the binding.operators.coreos.com API apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: ... services: - group: "" version: v1 kind: Secret name: hippo-pguser-hippo Example: Specification that is compliant with the servicebinding.io API apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: ... service: apiVersion: v1 kind: Secret name: hippo-pguser-hippo 6.6.1.3. Declaring binding data through CRD or CR annotations You can use this method to annotate the resources of the backing service to expose the binding data with specific annotations. Adding annotations under the metadata section alters the CRs and CRDs of the backing services. Service Binding Operator detects the annotations added to the CRs and CRDs and then creates a Secret resource with the values extracted based on the annotations. 
The following examples show the annotations that are added under the metadata section and the referenced Secret and ConfigMap objects from a resource: Example: Exposing binding data from a Secret object defined in the CR annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret' ... The example places the secret name in the {.metadata.name}-pguser-{.metadata.name} template that resolves to hippo-pguser-hippo . The template can contain multiple JSONPath expressions. Example: Referenced Secret object from a resource apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: "<password>" user: "<username>" Example: Exposing binding data from a ConfigMap object defined in the CR annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap' ... The example places the config map name in the {.metadata.name}-config template that resolves to hippo-config . The template can contain multiple JSONPath expressions. Example: Referenced ConfigMap object from a resource apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: "10s" user: "hippo" 6.6.1.4. Detection of binding data through owned resources You can use this method if your backing service owns one or more Kubernetes resources such as route, service, config map, or secret that you can use to detect the binding data. In this method, the Service Binding Operator detects the binding data from resources owned by the backing service CR. The following example shows the detectBindingResources API option set to true in the ServiceBinding CR: Example apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-detect-all namespace: my-petclinic spec: detectBindingResources: true services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments In the example, the PostgresCluster custom service resource owns one or more Kubernetes resources such as route, service, config map, or secret. The Service Binding Operator automatically detects the binding data exposed on each of the owned resources. 6.6.2. Data model The data model used in the annotations follows specific conventions. Service binding annotations must use the following convention: service.binding(/<NAME>)?: "<VALUE>|(path=<JSONPATH_TEMPLATE>(,objectType=<OBJECT_TYPE>)?(,elementType=<ELEMENT_TYPE>)?(,sourceKey=<SOURCE_KEY>)?(,sourceValue=<SOURCE_VALUE>)?)" where: <NAME> Specifies the name under which the binding value is to be exposed. You can exclude it only when the objectType parameter is set to Secret or ConfigMap . <VALUE> Specifies the constant value exposed when no path is set. The data model provides the details on the allowed values and semantics for the path , elementType , objectType , sourceKey , and sourceValue parameters. Table 6.4. Parameters and their descriptions (columns: Parameter, Description, Default value) path : JSONPath template that consists of JSONPath expressions enclosed by curly braces {} . Default value: N/A.
elementType : Specifies whether the value of the element referenced in the path parameter complies with any one of the following types: string , sliceOfStrings , sliceOfMaps . Default value: string . objectType : Specifies whether the value of the element indicated in the path parameter refers to a ConfigMap , Secret , or plain string in the current namespace. Default value: Secret , if elementType is non-string. sourceKey : Specifies the key in the ConfigMap or Secret resource to be added to the binding secret when collecting the binding data. Note: When used in conjunction with elementType = sliceOfMaps , the sourceKey parameter specifies the key in the slice of maps whose value is used as a key in the binding secret. Use this optional parameter to expose a specific entry in the referenced Secret or ConfigMap resource as binding data. When not specified, all keys and values from the Secret or ConfigMap resource are exposed and are added to the binding secret. Default value: N/A. sourceValue : Specifies the key in the slice of maps. Note: The value of this key is used as the base to generate the value of the entry for the key-value pair to be added to the binding secret. In addition, the value of the sourceKey is used as the key of the entry for the key-value pair to be added to the binding secret. It is mandatory only if elementType = sliceOfMaps . Default value: N/A. Note The sourceKey and sourceValue parameters are applicable only if the element indicated in the path parameter refers to a ConfigMap or Secret resource. 6.6.3. Setting annotations mapping to be optional You can have optional fields in the annotations. For example, a path to the credentials might not be present if the service endpoint does not require authentication. In such cases, a field might not exist in the target path of the annotations. As a result, Service Binding Operator generates an error, by default. As a service provider, to indicate whether you require annotations mapping, you can set a value for the optional flag in your annotations when enabling services. Service Binding Operator provides annotations mapping only if the target path is available. When the target path is not available, the Service Binding Operator skips the optional mapping and continues with the projection of the existing mappings without throwing any errors. Procedure To make a field in the annotations optional, set the optional flag value to true : Example apiVersion: apps.example.org/v1beta1 kind: Database metadata: name: my-db namespace: my-petclinic annotations: service.binding/username: path={.spec.name},optional=true ... Note If you set the optional flag value to false and the Service Binding Operator is unable to find the target path, the Operator fails the annotations mapping. If the optional flag has no value set, the Service Binding Operator considers the value as false by default and fails the annotations mapping. 6.6.4. RBAC requirements To expose the backing service binding data using the Service Binding Operator, you require certain role-based access control (RBAC) permissions. Specify certain verbs under the rules field of the ClusterRole resource to grant the RBAC permissions for the backing service resources. When you define these rules , you allow the Service Binding Operator to read the binding data of the backing service resources throughout the cluster. If users do not have permissions to read the binding data or to modify the application resource, the Service Binding Operator prevents such users from binding services to applications.
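To make the last point concrete, the following namespace-scoped Role is a minimal sketch, not taken from this document, of the kind of permissions a developer needs before the Operator will process their binding request. The Role name and the exact resource lists are assumptions based on the Spring PetClinic example used elsewhere in this guide:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: petclinic-binding-user  # hypothetical Role name
  namespace: my-petclinic
rules:
# Allow the user to create and inspect ServiceBinding CRs in the namespace.
- apiGroups: ["binding.operators.coreos.com"]
  resources: ["servicebindings"]
  verbs: ["get", "list", "create"]
# Allow the user to read the backing service that exposes the binding data.
- apiGroups: ["postgres-operator.crunchydata.com"]
  resources: ["postgresclusters"]
  verbs: ["get", "list"]
# Allow the user to modify the application resource that receives the projected binding data.
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "update", "patch"]

A cluster administrator would still grant this Role to the developer with a RoleBinding in the same namespace.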
Adhering to the RBAC requirements avoids unnecessary permission elevation for the user and prevents access to unauthorized services or applications. The Service Binding Operator performs requests against the Kubernetes API using a dedicated service account. By default, this account has permissions to bind services to workloads, both represented by the following standard Kubernetes or OpenShift objects: Deployments DaemonSets ReplicaSets StatefulSets DeploymentConfigs The Operator service account is bound to an aggregated cluster role, allowing Operator providers or cluster administrators to enable binding custom service resources to workloads. To grant the required permissions within a ClusterRole , label it with the servicebinding.io/controller flag and set the flag value to true . The following example shows how to allow the Service Binding Operator to get , watch , and list the custom resources (CRs) of Crunchy PostgreSQL Operator: Example: Enable binding to PostgreSQL database instances provisioned by Crunchy PostgreSQL Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: postgrescluster-reader labels: servicebinding.io/controller: "true" rules: - apiGroups: - postgres-operator.crunchydata.com resources: - postgresclusters verbs: - get - watch - list ... This cluster role can be deployed during the installation of the backing service Operator. 6.6.5. Categories of exposable binding data The Service Binding Operator enables you to expose the binding data values from the backing service resources and custom resource definitions (CRDs). This section provides examples to show how you can use the various categories of exposable binding data. You must modify these examples to suit your work environment and requirements. 6.6.5.1. Exposing a string from a resource The following example shows how to expose the string from the metadata.name field of the PostgresCluster custom resource (CR) as a username: Example apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name} ... 6.6.5.2. Exposing a constant value as the binding item The following examples show how to expose a constant value from the PostgresCluster custom resource (CR): Example: Exposing a constant value apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: "service.binding/type": "postgresql" 1 1 Binding type to be exposed with the postgresql value. 6.6.5.3. Exposing an entire config map or secret that is referenced from a resource The following examples show how to expose an entire secret through annotations: Example: Exposing an entire secret through annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret' Example: The referenced secret from the backing service resource apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: "<password>" user: "<username>" 6.6.5.4. 
Exposing a specific entry from a config map or secret that is referenced from a resource The following examples show how to expose a specific entry from a config map through annotations: Example: Exposing an entry from a config map through annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap,sourceKey=user' Example: The referenced config map from the backing service resource The binding data should have a key with name as db_timeout and value as 10s : apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: "10s" user: "hippo" 6.6.5.5. Exposing a resource definition value The following example shows how to expose a resource definition value through annotations: Example: Exposing a resource definition value through annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name} ... 6.6.5.6. Exposing entries of a collection with the key and value from each entry The following example shows how to expose the entries of a collection with the key and value from each entry through annotations: Example: Exposing the entries of a collection through annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: "service.binding/uri": "path={.status.connections},elementType=sliceOfMaps,sourceKey=type,sourceValue=url" spec: ... status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com The following example shows how the entries of a collection in annotations are projected into the bound application. Example: Binding data files /bindings/<binding-name>/uri_primary => primary.example.com /bindings/<binding-name>/uri_secondary => secondary.example.com /bindings/<binding-name>/uri_404 => black-hole.example.com Example: Configuration from a backing service resource status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com The example helps you to project all those values with keys such as primary , secondary , and so on. 6.6.5.7. Exposing items of a collection with one key per item The following example shows how to expose the items of a collection with one key per item through annotations: Example: Exposing the items of a collection through annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: "service.binding/tags": "path={.spec.tags},elementType=sliceOfStrings" spec: tags: - knowledge - is - power The following example shows how the items of a collection in annotations are projected into the bound application. Example: Binding data files /bindings/<binding-name>/tags_0 => knowledge /bindings/<binding-name>/tags_1 => is /bindings/<binding-name>/tags_2 => power Example: Configuration from a backing service resource spec: tags: - knowledge - is - power 6.6.5.8. 
Exposing values of collection entries with one key per entry value The following example shows how to expose the values of collection entries with one key per entry value through annotations: Example: Exposing the values of collection entries through annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: "service.binding/url": "path={.spec.connections},elementType=sliceOfStrings,sourceValue=url" spec: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com The following example shows how the values of a collection in annotations are projected into the bound application. Example: Binding data files /bindings/<binding-name>/url_0 => primary.example.com /bindings/<binding-name>/url_1 => secondary.example.com /bindings/<binding-name>/url_2 => black-hole.example.com 6.6.6. Additional resources Defining cluster service versions (CSVs) . Projecting binding data . 6.7. Projecting binding data This section provides information on how you can consume the binding data. 6.7.1. Consumption of binding data After the backing service exposes the binding data, for a workload to access and consume this data, you must project it into the workload from a backing service. Service Binding Operator automatically projects this set of data into the workload by using the following methods: By default, as files. As environment variables, after you configure the .spec.bindAsFiles parameter from the ServiceBinding resource. 6.7.2. Configuration of the directory path to project the binding data inside workload container By default, Service Binding Operator mounts the binding data as files at a specific directory in your workload resource. You can configure the directory path by using the SERVICE_BINDING_ROOT environment variable that is set in the container where your workload runs. Example: Binding data mounted as files 1 Root directory. 2 5 Directory that stores the binding data. 3 Mandatory identifier that identifies the type of the binding data projected into the corresponding directory. 4 Optional: Identifier to identify the provider so that the application can identify the type of backing service it can connect to. To consume the binding data as environment variables, use the built-in language feature of your programming language of choice that can read environment variables. Example: Python client usage Warning When you use the binding data directory name to look up the binding data, note that Service Binding Operator uses the ServiceBinding resource name ( .metadata.name ) as the binding data directory name. The spec also provides a way to override that name through the .spec.name field. As a result, there is a chance for binding data name collision if there are multiple ServiceBinding resources in the namespace. However, due to the nature of the volume mount in Kubernetes, the binding data directory will contain values from only one of the Secret resources. 6.7.2.1. Computation of the final path for projecting the binding data as files The following table summarizes the configuration of how the final path for the binding data projection is computed when files are mounted at a specific directory: Table 6.5.
Summary of the final path computation SERVICE_BINDING_ROOT Final path Not available /bindings/<ServiceBinding_ResourceName> dir/path/root dir/path/root/<ServiceBinding_ResourceName> In the table, the <ServiceBinding_ResourceName> entry specifies the name of the ServiceBinding resource that you configure in the .metadata.name section of the custom resource (CR). Note By default, the projected files get their permissions set to 0644. Service Binding Operator cannot set specific permissions due to a bug in Kubernetes that causes issues if the service expects specific permissions such as 0600 . As a workaround, you can modify the code of the program or the application that is running inside a workload resource to copy the file to the /tmp directory and set the appropriate permissions. To access and consume the binding data within the existing SERVICE_BINDING_ROOT environment variable, use the built-in language feature of your programming language of choice that can read environment variables. Example: Python client usage In the example, the bindings_list variable contains the binding data for the postgresql database service type. 6.7.3. Projecting the binding data Depending on your workload requirements and environment, you can choose to project the binding data either as files or environment variables. Prerequisites You understand the following concepts: Environment and requirements of your workload, and how it works with the provided services. Consumption of the binding data in your workload resource. Configuration of how the final path for data projection is computed for the default method. The binding data is exposed from the backing service. Procedure To project the binding data as files, determine the destination folder by ensuring that the existing SERVICE_BINDING_ROOT environment variable is present in the container where your workload runs. To project the binding data as environment variables, set the value for the .spec.bindAsFiles parameter to false from the ServiceBinding resource in the custom resource (CR). 6.7.4. Additional resources Exposing binding data from a service . Using the projected binding data in the source code of the application . 6.8. Binding workloads using Service Binding Operator Application developers must bind a workload to one or more backing services by using a binding secret. This secret is generated for the purpose of storing information to be consumed by the workload. As an example, consider that the service you want to connect to is already exposing the binding data. In this case, you would also need a workload to be used along with the ServiceBinding custom resource (CR). By using this ServiceBinding CR, the workload sends a binding request with the details of the services to bind with. Example of ServiceBinding CR apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: 2 name: spring-petclinic group: apps version: v1 resource: deployments 1 Specifies a list of service resources. 2 The sample application that points to a Deployment or any other similar resource with an embedded PodSpec. As shown in the example, you can also directly use a ConfigMap or a Secret itself as a service resource to be used as a source of binding data. 6.8.1. Naming strategies Naming strategies are available only for the binding.operators.coreos.com API group. 
Naming strategies use Go templates to help you define custom binding names through the service binding request. Naming strategies apply for all attributes including the mappings in the ServiceBinding custom resource (CR). A backing service projects the binding names as files or environment variables into the workload. If a workload expects the projected binding names in a particular format, but the binding names to be projected from the backing service are not available in that format, then you can change the binding names using naming strategies. Predefined post-processing functions While using naming strategies, depending on the expectations or requirements of your workload, you can use the following predefined post-processing functions in any combination to convert the character strings: upper : Converts the character strings into capital or uppercase letters. lower : Converts the character strings into lowercase letters. title : Converts the character strings where the first letter of each word is capitalized except for certain minor words. Predefined naming strategies Binding names declared through annotations are processed for their name change before their projection into the workload according to the following predefined naming strategies: none : When applied, there are no changes in the binding names. Example After the template compilation, the binding names take the {{ .name }} form. host: hippo-pgbouncer port: 5432 upper : Applied when no namingStrategy is defined. When applied, converts all the character strings of the binding name key into capital or uppercase letters. Example After the template compilation, the binding names take the {{ .service.kind | upper}}_{{ .name | upper }} form. DATABASE_HOST: hippo-pgbouncer DATABASE_PORT: 5432 If your workload requires a different format, you can define a custom naming strategy and change the binding name using a prefix and a separator, for example, PORT_DATABASE . Note When the binding names are projected as files, by default the predefined none naming strategy is applied, and the binding names do not change. When the binding names are projected as environment variables and no namingStrategy is defined, by default the predefined uppercase naming strategy is applied. You can override the predefined naming strategies by defining custom naming strategies using different combinations of custom binding names and predefined post-processing functions. 6.8.2. Advanced binding options You can define the ServiceBinding custom resource (CR) to use the following advanced binding options: Changing binding names: This option is available only for the binding.operators.coreos.com API group. Composing custom binding data: This option is available only for the binding.operators.coreos.com API group. Binding workloads using label selectors: This option is available for both the binding.operators.coreos.com and servicebinding.io API groups. 6.8.2.1. Changing the binding names before projecting them into the workload You can specify the rules to change the binding names in the .spec.namingStrategy attribute of the ServiceBinding CR. For example, consider a Spring PetClinic sample application that connects to the PostgreSQL database. In this case, the PostgreSQL database service exposes the host and port fields of the database to use for binding. The Spring PetClinic sample application can access this exposed binding data through the binding names. Example: Spring PetClinic sample application in the ServiceBinding CR ... 
application: name: spring-petclinic group: apps version: v1 resource: deployments ... Example: PostgreSQL database service in the ServiceBinding CR ... services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo ... If namingStrategy is not defined and the binding names are projected as environment variables, then the host: hippo-pgbouncer value in the backing service and the projected environment variable would appear as shown in the following example: Example DATABASE_HOST: hippo-pgbouncer where: DATABASE Specifies the kind backend service. HOST Specifies the binding name. After applying the POSTGRESQL_{{ .service.kind | upper }}_{{ .name | upper }}_ENV naming strategy, the list of custom binding names prepared by the service binding request appears as shown in the following example: Example POSTGRESQL_DATABASE_HOST_ENV: hippo-pgbouncer POSTGRESQL_DATABASE_PORT_ENV: 5432 The following items describe the expressions defined in the POSTGRESQL_{{ .service.kind | upper }}_{{ .name | upper }}_ENV naming strategy: .name : Refers to the binding name exposed by the backing service. In the example, the binding names are HOST and PORT . .service.kind : Refers to the kind of service resource whose binding names are changed with the naming strategy. upper : String function used to post-process the character string while compiling the Go template string. POSTGRESQL : Prefix of the custom binding name. ENV : Suffix of the custom binding name. Similar to the example, you can define the string templates in namingStrategy to define how each key of the binding names should be prepared by the service binding request. 6.8.2.2. Composing custom binding data As an application developer, you can compose custom binding data under the following circumstances: The backing service does not expose binding data. The values exposed are not available in the required format as expected by the workload. For example, consider a case where the backing service CR exposes the host, port, and database user as binding data, but the workload requires that the binding data be consumed as a connection string. You can compose custom binding data using attributes in the Kubernetes resource representing the backing service. Example apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo 1 id: postgresDB 2 - group: "" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: name: spring-petclinic group: apps version: v1 resource: deployments mappings: ## From the database service - name: JDBC_URL value: 'jdbc:postgresql://{{ .postgresDB.metadata.annotations.proxy }}:{{ .postgresDB.spec.port }}/{{ .postgresDB.metadata.name }}' ## From both the services! - name: CREDENTIALS value: '{{ .postgresDB.metadata.name }}{{ translationService.postgresSecret.data.password }}' ## Generate JSON - name: DB_JSON 3 value: {{ json .postgresDB.status }} 4 1 Name of the backing service resource. 2 Optional identifier. 3 The JSON name that the Service Binding Operator generates. The Service Binding Operator projects this JSON name as the name of a file or environment variable. 4 The JSON value that the Service Binding Operator generates. The Service Binding Operator projects this JSON value as a file or environment variable. 
The JSON value contains the attributes from your specified field of the backing service custom resource. 6.8.2.3. Binding workloads using a label selector You can use a label selector to specify the workload to bind. If you declare a service binding using the label selectors to pick up workloads, the Service Binding Operator periodically attempts to find and bind new workloads that match the given label selector. For example, as a cluster administrator, you can bind a service to every Deployment in a namespace with the environment: production label by setting an appropriate labelSelector field in the ServiceBinding CR. This enables the Service Binding Operator to bind each of these workloads with one ServiceBinding CR. Example ServiceBinding CR in the binding.operators.coreos.com/v1alpha1 API apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: application: labelSelector: 1 matchLabels: environment: production group: apps version: v1 resource: deployments services: group: "" version: v1 kind: Secret name: super-secret-data 1 Specifies the workload that is being bound. Example ServiceBinding CR in the servicebinding.io API apiVersion: servicebindings.io/v1beta1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: workload: selector: 1 matchLabels: environment: production apiVersion: app/v1 kind: Deployment service: apiVersion: v1 kind: Secret name: super-secret-data 1 Specifies the workload that is being bound. Important If you define the following pairs of fields, Service Binding Operator refuses the binding operation and generates an error: The name and labelSelector fields in the binding.operators.coreos.com/v1alpha1 API. The name and selector fields in the servicebinding.io API (Spec API). Understanding the rebinding behavior Consider a case where, after a successful binding, you use the name field to identify a workload. If you delete and recreate that workload, the ServiceBinding reconciler does not rebind the workload, and the Operator cannot project the binding data to the workload. However, if you use the labelSelector field to identify a workload, the ServiceBinding reconciler rebinds the workload, and the Operator projects the binding data. 6.8.3. Binding secondary workloads that are not compliant with PodSpec A typical scenario in service binding involves configuring the backing service, the workload (Deployment), and Service Binding Operator. Consider a scenario that involves a secondary workload (which can also be an application Operator) that is not compliant with PodSpec and is between the primary workload (Deployment) and Service Binding Operator. For such secondary workload resources, the location of the container path is arbitrary. For service binding, if the secondary workload in a CR is not compliant with the PodSpec, you must specify the location of the container path. Doing so projects the binding data into the container path specified in the secondary workload of the ServiceBinding custom resource (CR), for example, when you do not want the binding data inside a pod. In Service Binding Operator, you can configure the path of where containers or secrets reside within a workload and bind these paths at a custom location. 6.8.3.1. 
Configuring the custom location of the container path This custom location is available for the binding.operators.coreos.com API group when Service Binding Operator projects the binding data as environment variables. Consider a secondary workload CR, which is not compliant with the PodSpec and has containers located at the spec.containers path: Example: Secondary workload CR apiVersion: "operator.sbo.com/v1" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - name: hello-world image: quay.io/baijum/secondary-workload:latest ports: - containerPort: 8080 Procedure Configure the spec.containers path by specifying a value in the ServiceBinding CR and bind this path to a spec.application.bindingPath.containersPath custom location: Example: ServiceBinding CR with the spec.containers path in a custom location apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo id: postgresDB - group: "" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: 1 name: spring-petclinic group: apps version: v1 resource: deployments application: 2 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: containersPath: spec.containers 3 1 The sample application that points to a Deployment or any other similar resource with an embedded PodSpec. 2 The secondary workload, which is not compliant with the PodSpec. 3 The custom location of the container path. After you specify the location of the container path, Service Binding Operator generates the binding data, which becomes available in the container path specified in the secondary workload of the ServiceBinding CR. The following example shows the spec.containers path with the envFrom and secretRef fields: Example: Secondary workload CR with the envFrom and secretRef fields apiVersion: "operator.sbo.com/v1" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - env: 1 - name: ServiceBindingOperatorChangeTriggerEnvVar value: "31793" envFrom: - secretRef: name: secret-resource-name 2 image: quay.io/baijum/secondary-workload:latest name: hello-world ports: - containerPort: 8080 resources: {} 1 Unique array of containers with values generated by the Service Binding Operator. These values are based on the backing service CR. 2 Name of the Secret resource generated by the Service Binding Operator. 6.8.3.2. Configuring the custom location of the secret path This custom location is available for the binding.operators.coreos.com API group when Service Binding Operator projects the binding data as environment variables. Consider a secondary workload CR, which is not compliant with the PodSpec, with only the secret at the spec.secret path: Example: Secondary workload CR apiVersion: "operator.sbo.com/v1" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: "" Procedure Configure the spec.secret path by specifying a value in the ServiceBinding CR and bind this path at a spec.application.bindingPath.secretPath custom location: Example: ServiceBinding CR with the spec.secret path in a custom location apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: ... application: 1 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: secretPath: spec.secret 2 ... 
1 The secondary workload, which is not compliant with the PodSpec. 2 The custom location of the secret path that contains the name of the Secret resource. After you specify the location of the secret path, Service Binding Operator generates the binding data, which becomes available in the secret path specified in the secondary workload of the ServiceBinding CR. The following example shows the spec.secret path with the binding-request value: Example: Secondary workload CR with the binding-request value ... apiVersion: "operator.sbo.com/v1" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: binding-request-72ddc0c540ab3a290e138726940591debf14c581 1 ... 1 The unique name of the Secret resource that Service Binding Operator generates. 6.8.3.3. Workload resource mapping Note Workload resource mapping is available for the secondary workloads of the ServiceBinding custom resource (CR) for both the API groups: binding.operators.coreos.com and servicebinding.io . You must define ClusterWorkloadResourceMapping resources only under the servicebinding.io API group. However, the ClusterWorkloadResourceMapping resources interact with ServiceBinding resources under both the binding.operators.coreos.com and servicebinding.io API groups. If you cannot configure custom path locations by using the configuration method for container path, you can define exactly where binding data needs to be projected. Specify where to project the binding data for a given workload kind by defining the ClusterWorkloadResourceMapping resources in the servicebinding.io API group. The following example shows how to define a mapping for the CronJob.batch/v1 resources. Example: Mapping for CronJob.batch/v1 resources apiVersion: servicebinding.io/v1beta1 kind: ClusterWorkloadResourceMapping metadata: name: cronjobs.batch 1 spec: versions: - version: "v1" 2 annotations: .spec.jobTemplate.spec.template.metadata.annotations 3 containers: - path: .spec.jobTemplate.spec.template.spec.containers[*] 4 - path: .spec.jobTemplate.spec.template.spec.initContainers[*] name: .name 5 env: .env 6 volumeMounts: .volumeMounts 7 volumes: .spec.jobTemplate.spec.template.spec.volumes 8 1 Name of the ClusterWorkloadResourceMapping resource, which must be qualified as the plural.group of the mapped workload resource. 2 Version of the resource that is being mapped. Any version that is not specified can be matched with the "*" wildcard. 3 Optional: Identifier of the .annotations field in a pod, specified with a fixed JSONPath. The default value is .spec.template.spec.annotations . 4 Identifier of the .containers and .initContainers fields in a pod, specified with a JSONPath. If no entries under the containers field are defined, the Service Binding Operator defaults to two paths: .spec.template.spec.containers[*] and .spec.template.spec.initContainers[\*] , with all other fields set as their default. However, if you specify an entry, then you must define the .path field. 5 Optional: Identifier of the .name field in a container, specified with a fixed JSONPath. The default value is .name . 6 Optional: Identifier of the .env field in a container, specified with a fixed JSONPath. The default value is .env . 7 Optional: Identifier of the .volumeMounts field in a container, specified with a fixed JSONPath. The default value is .volumeMounts . 8 Optional: Identifier of the .volumes field in a pod, specified with a fixed JSONPath. The default value is .spec.template.spec.volumes . 
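As an illustrative sketch that is not taken from this document, once a mapping such as the CronJob.batch/v1 example above is installed, a ServiceBinding in the servicebinding.io API group can target a CronJob workload directly. The ServiceBinding name and the CronJob name nightly-report are hypothetical:

apiVersion: servicebinding.io/v1beta1
kind: ServiceBinding
metadata:
  name: nightly-report-binding  # hypothetical ServiceBinding name
  namespace: my-petclinic
spec:
  service:
    apiVersion: postgres-operator.crunchydata.com/v1beta1
    kind: PostgresCluster
    name: hippo
  workload:
    apiVersion: batch/v1
    kind: CronJob
    name: nightly-report  # hypothetical CronJob that needs the database credentials

The ClusterWorkloadResourceMapping tells the Service Binding Operator where, inside the CronJob template, to project the binding volume, volume mounts, and environment variables.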
Important In this context, a fixed JSONPath is a subset of the JSONPath grammar that accepts only the following operations: Field lookup: .spec.template Array indexing: .spec['template'] All other operations are not accepted. Most of these fields are optional. When they are not specified, the Service Binding Operator assumes defaults compatible with PodSpec resources. The Service Binding Operator requires that each of these fields is structurally equivalent to the corresponding field in a pod deployment. For example, the contents of the .env field in a workload resource must be able to accept the same structure of data that the .env field in a Pod resource would. Otherwise, projecting binding data into such a workload might result in unexpected behavior from the Service Binding Operator. Behavior specific to the binding.operators.coreos.com API group You can expect the following behaviors when ClusterWorkloadResourceMapping resources interact with ServiceBinding resources under the binding.operators.coreos.com API group: If a ServiceBinding resource with the bindAsFiles: false flag value is created together with one of these mappings, then environment variables are projected into the .envFrom field underneath each path field specified in the corresponding ClusterWorkloadResourceMapping resource. As a cluster administrator, you can specify both a ClusterWorkloadResourceMapping resource and the .spec.application.bindingPath.containersPath field in a ServiceBinding.bindings.coreos.com resource for binding purposes. The Service Binding Operator attempts to project binding data into the locations specified in both a ClusterWorkloadResourceMapping resource and the .spec.application.bindingPath.containersPath field. This behavior is equivalent to adding a container entry to the corresponding ClusterWorkloadResourceMapping resource with the path: USDcontainersPath attribute, with all other values taking their default value. 6.8.4. Unbinding workloads from a backing service You can unbind a workload from a backing service by using the oc tool. To unbind a workload from a backing service, delete the ServiceBinding custom resource (CR) linked to it: USD oc delete ServiceBinding <.metadata.name> Example USD oc delete ServiceBinding spring-petclinic-pgcluster where: spring-petclinic-pgcluster Specifies the name of the ServiceBinding CR. 6.8.5. Additional resources Binding a workload together with a backing service . Connecting the Spring PetClinic sample application to the PostgreSQL database service . Creating custom resources from a file Example schema of the ClusterWorkloadResourceMapping resource . 6.9. Connecting an application to a service using the Developer perspective In addition to grouping multiple components within an application, you can also use the Topology view to connect components with each other. You can either use a binding connector or a visual one to connect components. A binding connection between the components can be established only if the target node is an Operator-backed service. This is indicated by the Create a binding connector tool-tip which appears when you drag an arrow to such a target node. When an application is connected to a service by using a binding connector a ServiceBinding resource is created. Then, the Service Binding Operator controller projects the necessary binding data into the application deployment. After the request is successful, the application is redeployed establishing an interaction between the connected components. 
A visual connector establishes only a visual connection between the components, depicting an intent to connect. No interaction between the components is established. If the target node is not an Operator-backed service, the Create a visual connector tool-tip is displayed when you drag an arrow to a target node. 6.9.1. Discovering and identifying Operator-backed bindable services As a user, if you want to create a bindable service, you must know which services are bindable. Bindable services are services that the applications can consume easily because they expose their binding data such as credentials, connection details, volume mounts, secrets, and other binding data in a standard way. The Developer perspective helps you discover and identify such bindable services. Procedure To discover and identify Operator-backed bindable services, consider the following alternative approaches: Click +Add Developer Catalog Operator Backed to see the Operator-backed tiles. Operator-backed services that support service binding features have a Bindable badge on the tiles. On the left pane of the Operator Backed page, select the Bindable checkbox. Tip Click the help icon next to Service binding to see more information about bindable services. Click +Add Add and search for Operator-backed services. When you click the bindable service, you can view the Bindable badge in the side panel to the right. 6.9.2. Creating a visual connection between components You can depict an intent to connect application components by using the visual connector. This procedure walks you through an example of creating a visual connection between a PostgreSQL Database service and a Spring PetClinic sample application. Prerequisites You have created and deployed a Spring PetClinic sample application by using the Developer perspective. You have created and deployed a Crunchy PostgreSQL database instance by using the Developer perspective. This instance has the following components: hippo-backup , hippo-instance , hippo-repo-host , and hippo-pgbouncer . Procedure Hover over the Spring PetClinic sample application to see a dangling arrow on the node. Figure 6.2. Visual connector Click and drag the arrow towards the hippo-pgbouncer deployment to connect the Spring PetClinic sample application with it. Click the spring-petclinic deployment to see the Overview panel. Under the Details tab, click the edit icon in the Annotations section to see the Key = app.openshift.io/connects-to and Value = [{"apiVersion":"apps/v1","kind":"Deployment","name":"hippo-pgbouncer"}] annotation added to the deployment. Optional: You can repeat these steps to establish visual connections between other applications and components you create. Figure 6.3. Connecting multiple applications 6.9.3. Creating a binding connection between components You can establish a binding connection with Operator-backed components. This procedure walks you through an example of creating a binding connection between a PostgreSQL Database service and a Spring PetClinic sample application. To create a binding connection with a service that is backed by the PostgreSQL Database Operator, you must first add the Red Hat-provided PostgreSQL Database Operator to the OperatorHub , and then install the Operator. The PostgreSQL Database Operator then creates and manages the Database resource, which exposes the binding information in secrets, config maps, status, and spec attributes. Prerequisites You have created and deployed a Spring PetClinic sample application by using the Developer perspective.
You have installed Service Binding Operator from the OperatorHub. You have installed the Crunchy Postgres for Kubernetes Operator from the OperatorHub by using the v5 Update channel. You have created and deployed a Crunchy PostgreSQL database instance by using the Developer perspective. This instance has the following components: hippo-backup , hippo-instance , hippo-repo-host , and hippo-pgbouncer . Procedure Switch to the Developer perspective and ensure that you are in the appropriate project, for example, my-petclinic . In the Topology view, hover over the Spring PetClinic sample application to see a dangling arrow on the node. Click and drag the arrow towards the hippo database in the Postgres Cluster to make a binding connection with the Spring PetClinic sample application. Enter the name and click Create . Figure 6.4. Service Binding dialog Alternatively, in the +Add view, click the YAML option to see the Import YAML screen. Use the YAML editor and add the ServiceBinding resource: apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments A service binding request is created and the Service Binding Operator controller projects the database service connection information into the application deployment as files by using a volume mount. After the request is successful, the application is redeployed and the connection is established. Figure 6.5. Binding connector Note You can also use the context menu by dragging the dangling arrow to add and create a binding connection to an operator-backed service. Figure 6.6. Context menu to create binding connection 6.9.4. Verifying the status of your service binding from the Topology view The Developer perspective helps you verify the status of your service binding through the Topology view. Procedure If a service binding was successful, click the binding connector. A side panel appears displaying the Connected status under the Details tab. Optionally, you can view the Connected status on the following pages from the Developer perspective: The ServiceBindings page. The ServiceBinding details page. In addition, the page title displays a Connected badge. If a service binding was unsuccessful, the binding connector shows a red arrowhead and a red cross in the middle of the connection. Click this connector to view the Error status in the side panel under the Details tab. Optionally, click the Error status to view specific information about the underlying problem. You can also view the Error status and a tooltip on the following pages from the Developer perspective: The ServiceBindings page. The ServiceBinding details page. In addition, the page title displays an Error badge. Tip In the ServiceBindings page, use the Filter dropdown to list the service bindings based on their status. 6.9.5. Additional resources Getting started with service binding . Known bindable Operators . | [
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo spec: image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.4-0 postgresVersion: 14 instances: - name: instance1 dataVolumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi backups: pgbackrest: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0 repos: - name: repo1 volume: volumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi EOD",
"postgrescluster.postgres-operator.crunchydata.com/hippo created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE hippo-backup-9rxm-88rzq 0/1 Completed 0 2m2s hippo-instance1-6psd-0 4/4 Running 0 3m28s hippo-repo-host-0 2/2 Running 0 3m28s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD",
"deployment.apps/spring-petclinic created service/spring-petclinic created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s",
"oc expose service spring-petclinic -n my-petclinic",
"route.route.openshift.io/spring-petclinic exposed",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster 2 name: hippo application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD",
"servicebinding.binding.operators.coreos.com/spring-petclinic created",
"oc get servicebindings -n my-petclinic",
"NAME READY REASON AGE spring-petclinic-pgcluster True ApplicationsBound 7s",
"for i in username password host port type; do oc exec -it deploy/spring-petclinic -n my-petclinic -- /bin/bash -c 'cd /tmp; find /bindings/*/'USDi' -exec echo -n {}:\" \" \\; -exec cat {} \\;'; echo; done",
"/bindings/spring-petclinic-pgcluster/username: <username> /bindings/spring-petclinic-pgcluster/password: <password> /bindings/spring-petclinic-pgcluster/host: hippo-primary.my-petclinic.svc /bindings/spring-petclinic-pgcluster/port: 5432 /bindings/spring-petclinic-pgcluster/type: postgresql",
"oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic",
"Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080",
"oc apply -f - << EOD --- apiVersion: v1 kind: Namespace metadata: name: my-petclinic --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: postgres-operator-group namespace: my-petclinic --- apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: ibm-multiarch-catalog namespace: openshift-marketplace spec: sourceType: grpc image: quay.io/ibm/operator-registry-<architecture> 1 imagePullPolicy: IfNotPresent displayName: ibm-multiarch-catalog updateStrategy: registryPoll: interval: 30m --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: postgresql-operator-dev4devs-com namespace: openshift-operators spec: channel: alpha installPlanApproval: Automatic name: postgresql-operator-dev4devs-com source: ibm-multiarch-catalog sourceNamespace: openshift-marketplace --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: database-view labels: servicebinding.io/controller: \"true\" rules: - apiGroups: - postgresql.dev4devs.com resources: - databases verbs: - get - list EOD",
"oc get subs -n openshift-operators",
"NAME PACKAGE SOURCE CHANNEL postgresql-operator-dev4devs-com postgresql-operator-dev4devs-com ibm-multiarch-catalog alpha rh-service-binding-operator rh-service-binding-operator redhat-operators stable",
"oc apply -f - << EOD apiVersion: postgresql.dev4devs.com/v1alpha1 kind: Database metadata: name: sampledatabase namespace: my-petclinic annotations: host: sampledatabase type: postgresql port: \"5432\" service.binding/database: 'path={.spec.databaseName}' service.binding/port: 'path={.metadata.annotations.port}' service.binding/password: 'path={.spec.databasePassword}' service.binding/username: 'path={.spec.databaseUser}' service.binding/type: 'path={.metadata.annotations.type}' service.binding/host: 'path={.metadata.annotations.host}' spec: databaseCpu: 30m databaseCpuLimit: 60m databaseMemoryLimit: 512Mi databaseMemoryRequest: 128Mi databaseName: \"sampledb\" databaseNameKeyEnvVar: POSTGRESQL_DATABASE databasePassword: \"samplepwd\" databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD databaseStorageRequest: 1Gi databaseUser: \"sampleuser\" databaseUserKeyEnvVar: POSTGRESQL_USER image: registry.redhat.io/rhel8/postgresql-13:latest databaseStorageClassName: nfs-storage-provisioner size: 1 EOD",
"database.postgresql.dev4devs.com/sampledatabase created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE sampledatabase-cbc655488-74kss 0/1 Running 0 32s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres - name: org.springframework.cloud.bindings.boot.enable value: \"true\" ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD",
"deployment.apps/spring-petclinic created service/spring-petclinic created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgresql.dev4devs.com kind: Database 2 name: sampledatabase version: v1alpha1 application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD",
"servicebinding.binding.operators.coreos.com/spring-petclinic created",
"oc get servicebindings -n my-petclinic",
"NAME READY REASON AGE spring-petclinic-postgresql True ApplicationsBound 47m",
"oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic",
"Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080",
"apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service spec: status: binding: name: hippo-pguser-hippo",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: services: - group: \"example.com\" version: v1alpha1 kind: AccountService name: prod-account-service application: name: spring-petclinic group: apps version: v1 resource: deployments",
"apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: service: apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service workload: apiVersion: apps/v1 kind: Deployment name: spring-petclinic",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: services: - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo",
"apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: service: apiVersion: v1 kind: Secret name: hippo-pguser-hippo",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap'",
"apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: \"10s\" user: \"hippo\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-detect-all namespace: my-petclinic spec: detectBindingResources: true services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments",
"service.binding(/<NAME>)?: \"<VALUE>|(path=<JSONPATH_TEMPLATE>(,objectType=<OBJECT_TYPE>)?(,elementType=<ELEMENT_TYPE>)?(,sourceKey=<SOURCE_KEY>)?(,sourceValue=<SOURCE_VALUE>)?)\"",
"apiVersion: apps.example.org/v1beta1 kind: Database metadata: name: my-db namespace: my-petclinic annotations: service.binding/username: path={.spec.name},optional=true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: postgrescluster-reader labels: servicebinding.io/controller: \"true\" rules: - apiGroups: - postgres-operator.crunchydata.com resources: - postgresclusters verbs: - get - watch - list",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name}",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/type\": \"postgresql\" 1",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap,sourceKey=user'",
"apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: \"10s\" user: \"hippo\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name}",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/uri\": \"path={.status.connections},elementType=sliceOfMaps,sourceKey=type,sourceValue=url\" spec: status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"/bindings/<binding-name>/uri_primary => primary.example.com /bindings/<binding-name>/uri_secondary => secondary.example.com /bindings/<binding-name>/uri_404 => black-hole.example.com",
"status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/tags\": \"path={.spec.tags},elementType=sliceOfStrings\" spec: tags: - knowledge - is - power",
"/bindings/<binding-name>/tags_0 => knowledge /bindings/<binding-name>/tags_1 => is /bindings/<binding-name>/tags_2 => power",
"spec: tags: - knowledge - is - power",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/url\": \"path={.spec.connections},elementType=sliceOfStrings,sourceValue=url\" spec: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"/bindings/<binding-name>/url_0 => primary.example.com /bindings/<binding-name>/url_1 => secondary.example.com /bindings/<binding-name>/url_2 => black-hole.example.com",
"USDSERVICE_BINDING_ROOT 1 ├── account-database 2 │ ├── type 3 │ ├── provider 4 │ ├── uri │ ├── username │ └── password └── transaction-event-stream 5 ├── type ├── connection-count ├── uri ├── certificates └── private-key",
"import os username = os.getenv(\"USERNAME\") password = os.getenv(\"PASSWORD\")",
"from pyservicebinding import binding try: sb = binding.ServiceBinding() except binding.ServiceBindingRootMissingError as msg: # log the error message and retry/exit print(\"SERVICE_BINDING_ROOT env var not set\") sb = binding.ServiceBinding() bindings_list = sb.bindings(\"postgresql\")",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: 2 name: spring-petclinic group: apps version: v1 resource: deployments",
"host: hippo-pgbouncer port: 5432",
"DATABASE_HOST: hippo-pgbouncer DATABASE_PORT: 5432",
"application: name: spring-petclinic group: apps version: v1 resource: deployments",
"services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo",
"DATABASE_HOST: hippo-pgbouncer",
"POSTGRESQL_DATABASE_HOST_ENV: hippo-pgbouncer POSTGRESQL_DATABASE_PORT_ENV: 5432",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo 1 id: postgresDB 2 - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: name: spring-petclinic group: apps version: v1 resource: deployments mappings: ## From the database service - name: JDBC_URL value: 'jdbc:postgresql://{{ .postgresDB.metadata.annotations.proxy }}:{{ .postgresDB.spec.port }}/{{ .postgresDB.metadata.name }}' ## From both the services! - name: CREDENTIALS value: '{{ .postgresDB.metadata.name }}{{ translationService.postgresSecret.data.password }}' ## Generate JSON - name: DB_JSON 3 value: {{ json .postgresDB.status }} 4",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: application: labelSelector: 1 matchLabels: environment: production group: apps version: v1 resource: deployments services: group: \"\" version: v1 kind: Secret name: super-secret-data",
"apiVersion: servicebindings.io/v1beta1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: workload: selector: 1 matchLabels: environment: production apiVersion: app/v1 kind: Deployment service: apiVersion: v1 kind: Secret name: super-secret-data",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - name: hello-world image: quay.io/baijum/secondary-workload:latest ports: - containerPort: 8080",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo id: postgresDB - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: 1 name: spring-petclinic group: apps version: v1 resource: deployments application: 2 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: containersPath: spec.containers 3",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - env: 1 - name: ServiceBindingOperatorChangeTriggerEnvVar value: \"31793\" envFrom: - secretRef: name: secret-resource-name 2 image: quay.io/baijum/secondary-workload:latest name: hello-world ports: - containerPort: 8080 resources: {}",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: \"\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: application: 1 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: secretPath: spec.secret 2",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: binding-request-72ddc0c540ab3a290e138726940591debf14c581 1",
"apiVersion: servicebinding.io/v1beta1 kind: ClusterWorkloadResourceMapping metadata: name: cronjobs.batch 1 spec: versions: - version: \"v1\" 2 annotations: .spec.jobTemplate.spec.template.metadata.annotations 3 containers: - path: .spec.jobTemplate.spec.template.spec.containers[*] 4 - path: .spec.jobTemplate.spec.template.spec.initContainers[*] name: .name 5 env: .env 6 volumeMounts: .volumeMounts 7 volumes: .spec.jobTemplate.spec.template.spec.volumes 8",
"oc delete ServiceBinding <.metadata.name>",
"oc delete ServiceBinding spring-petclinic-pgcluster",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/building_applications/connecting-applications-to-services |
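The directory tree and the pyservicebinding snippet in the commands above show how bindings are projected as files under $SERVICE_BINDING_ROOT. As a minimal, language-agnostic sketch (not taken from the documentation; the /bindings fallback path and the loop structure are assumptions for illustration), the same layout can be read directly from a shell:

```bash
#!/bin/bash
# Sketch only: enumerate projected service bindings and their entries.
# SERVICE_BINDING_ROOT is set when bindings are mounted as files; the
# /bindings default below is an assumption for this example.
SERVICE_BINDING_ROOT="${SERVICE_BINDING_ROOT:-/bindings}"

for binding_dir in "${SERVICE_BINDING_ROOT}"/*/; do
    name="$(basename "${binding_dir}")"
    type="$(cat "${binding_dir}type" 2>/dev/null || echo unknown)"
    echo "binding: ${name} (type: ${type})"
    # Each remaining file is one binding entry, for example uri, username, password.
    for entry in "${binding_dir}"*; do
        printf '  %s\n' "${entry##*/}"
    done
done
```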
4.4. Storage and Filesystems | 4.4. Storage and Filesystems The NFSv4 server in Red Hat Enterprise Linux 6 currently allows clients to mount using UDP and advertises NFSv4 over UDP with rpcbind. However, this configuration is not supported by Red Hat and violates the RFC 3530 standard. If a device-mapper-multipath device is still open, but all of the attached paths have been lost, the device is unable to create a new table with no paths. Consequently, the following unusual output may be returned from the multipath -ll command: Output of this type indicates that there are no paths to the device. The erroneous lines in the output preceded by the string #:#:#:# will be removed in a future release. dracut currently only supports one Fibre Channel over Ethernet (FCoE) connection to be used to boot from the root device. Consequently, booting from a root device that spans multiple FCoE devices (for example, using RAID, LVM, or similar techniques) is not possible. pvmove cannot currently be used to move mirror devices. However, it is possible to move mirror devices by issuing a sequence of two commands. For mirror images, add a new image on the destination PV and then remove the mirror image on the source PV. Mirror logs can be handled in a similar fashion: | [
"mpatha (3600a59a0000c2fd0003079284c122fec) dm-0, size=2.0G hwhandler='0' |-+- policy='round-robin 0' prio=0 status=enabled | `- #:#:#:# - #:# failed faulty running `-+- policy='round-robin 0' prio=0 status=enabled |- #:#:#:# - #:# failed faulty running `- #:#:#:# - #:# failed faulty running",
"USD> lvconvert -m +1 <vg/lv> <new PV> USD> lvconvert -m -1 <vg/lv> <old PV>",
"USD> lvconvert --mirrorlog core <vg/lv> USD> lvconvert --mirrorlog disk <vg/lv> <new PV>",
"or USD> lvconvert --mirrorlog mirrored <vg/lv> <new PV> USD> lvconvert --mirrorlog disk <vg/lv> <old PV>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/ar01s04s04 |
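The lvconvert commands above use <vg/lv> and PV placeholders. As a concrete illustration only (the volume group, logical volume, and physical volume names below are hypothetical), the two-step move of a mirror image from one PV to another looks like this:

```bash
# Hypothetical names: volume group vg0, mirrored LV lv_mirror,
# destination PV /dev/sdc1, source PV /dev/sdb1.
# Step 1: add a new mirror image on the destination PV.
lvconvert -m +1 vg0/lv_mirror /dev/sdc1
# Step 2: remove the mirror image that resides on the source PV.
lvconvert -m -1 vg0/lv_mirror /dev/sdb1
```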
Chapter 3. Installing the client-side tools | Chapter 3. Installing the client-side tools Before you deploy the overcloud, you need to determine the configuration settings to apply to each client. Copy the example environment files from the heat template collection and modify the files to suit your environment. 3.1. Setting centralized logging client parameters For more information, see Enabling centralized logging with Elasticsearch in the Logging, Monitoring, and Troubleshooting guide. 3.2. Setting monitoring client parameters The monitoring solution collects system information periodically and provides a mechanism to store and monitor the values in a variety of ways using a data collecting agent. Red Hat supports collectd as a collection agent. Collectd-sensubility is an extension of collectd and communicates with Sensu server side through RabbitMQ. You can use Service Telemetry Framework (STF) to store the data, and in turn, monitor systems, find performance bottlenecks, and predict future system load. For more information about Service Telemetry Framework, see the Service Telemetry Framework 1.3 guide. To configure collectd and collectd-sensubility, complete the following steps: Create config.yaml in your home directory, for example, /home/templates/custom , and configure the MetricsQdrConnectors parameter to point to STF server side: MetricsQdrConnectors: - host: qdr-normal-sa-telemetry.apps.remote.tld port: 443 role: inter-router sslProfile: sslProfile verifyHostname: false MetricsQdrSSLProfiles: - name: sslProfile In the config.yaml file, list the plug-ins you want to use under CollectdExtraPlugins . You can also provide parameters in the ExtraConfig section. By default, collectd comes with the cpu , df , disk , hugepages , interface , load , memory , processes , tcpconns , unixsock , and uptime plug-ins. You can add additional plug-ins using the CollectdExtraPlugins parameter. You can also provide additional configuration information for the CollectdExtraPlugins using the ExtraConfig option. For example, to enable the virt plug-in, and configure the connection string and the hostname format, use the following syntax: parameter_defaults: CollectdExtraPlugins: - disk - df - virt ExtraConfig: collectd::plugin::virt::connection: "qemu:///system" collectd::plugin::virt::hostname_format: "hostname uuid" Note Do not remove the unixsock plug-in. Removal results in the permanent marking of the collectd container as unhealthy. Optional: To collect metric and event data through AMQ Interconnect, add the line MetricsQdrExternalEndpoint: true to the config.yaml file: To enable collectd-sensubility, add the following environment configuration to the config.yaml file: parameter_defaults: CollectdEnableSensubility: true # Use this if there is restricted access for your checks by using the sudo command. # The rule will be created in /etc/sudoers.d for sensubility to enable it calling restricted commands via sensubility executor. CollectdSensubilityExecSudoRule: "collectd ALL = NOPASSWD: <some command or ALL for all commands>" # Connection URL to Sensu server side for reporting check results. CollectdSensubilityConnection: "amqp://sensu:sensu@<sensu server side IP>:5672//sensu" # Interval in seconds for sending keepalive messages to Sensu server side. CollectdSensubilityKeepaliveInterval: 20 # Path to temporary directory where the check scripts are created. CollectdSensubilityTmpDir: /var/tmp/collectd-sensubility-checks # Path to shell used for executing check scripts. 
CollectdSensubilityShellPath: /usr/bin/sh # To improve check execution rate use this parameter and value to change the number of goroutines spawned for executing check scripts. CollectdSensubilityWorkerCount: 2 # JSON-formatted definition of standalone checks to be scheduled on client side. If you need to schedule checks # on overcloud nodes instead of Sensu server, use this parameter. Configuration is compatible with Sensu check definition. # For more information, see https://docs.sensu.io/sensu-core/1.7/reference/checks/#check-definition-specification # There are some configuration options which sensubility ignores such as: extension, publish, cron, stdin, hooks. CollectdSensubilityChecks: example: command: "ping -c1 -W1 8.8.8.8" interval: 30 # The following parameters are used to modify standard, standalone checks for monitoring container health on overcloud nodes. # Do not modify these parameters. # CollectdEnableContainerHealthCheck: true # CollectdContainerHealthCheckCommand: <snip> # CollectdContainerHealthCheckInterval: 10 # The Sensu server side event handler to use for events created by the container health check. # CollectdContainerHealthCheckHandlers: # - handle-container-health-check # CollectdContainerHealthCheckOccurrences: 3 # CollectdContainerHealthCheckRefresh: 90 Deploy the overcloud. Include config.yaml , collectd-write-qdr.yaml , and one of the qdr-*.yaml files in your overcloud deploy command: Optional: To enable overcloud RabbitMQ monitoring, include the collectd-read-rabbitmq.yaml file in the overcloud deploy command. Additional resources For more information about the YAML files, see Section 3.5, "YAML files" . For more information about collectd plug-ins, see Section 3.4, "Collectd plug-in configurations" . For more information about Service Telemetry Framework, see the Service Telemetry Framework 1.3 guide. 3.3. Collecting data through AMQ Interconnect To subscribe to the available AMQ Interconnect addresses for metric and event data consumption, create an environment file to expose AMQ Interconnect for client connections, and deploy the overcloud. Note The Service Telemetry Operator simplifies the deployment of all data ingestion and data storage components for single cloud deployments. To share the data storage domain with multiple clouds, see Configuring multiple clouds in the Service Telemetry Framework 1.3 guide. Warning It is not possible to switch between QDR mesh mode and QDR edge mode, as used by the Service Telemetry Framework (STF). Additionally, it is not possible to use QDR mesh mode if you enable data collection for STF. Procedure Log on to the Red Hat OpenStack Platform undercloud as the stack user. Create a configuration file called data-collection.yaml in the /home/stack directory. 
To enable external endpoints, add the MetricsQdrExternalEndpoint: true parameter to the data-collection.yaml file: parameter_defaults: MetricsQdrExternalEndpoint: true To enable collectd and AMQ Interconnect, add the following files to your Red Hat OpenStack Platform director deployment: the data-collection.yaml environment file the qdr-form-controller-mesh.yaml file that enables the client side AMQ Interconnect to connect to the external endpoints openstack overcloud deploy <other arguments> --templates /usr/share/openstack-tripleo-heat-templates \ --environment-file <...other-environment-files...> \ --environment-file /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-form-controller-mesh.yaml \ --environment-file /home/stack/data-collection.yaml Optional: To collect Ceilometer and collectd events, include ceilometer-write-qdr.yaml and collectd-write-qdr.yaml file in your overcloud deploy command. Deploy the overcloud. Additional resources For more information about the YAML files, see Section 3.5, "YAML files" . 3.4. Collectd plug-in configurations There are many configuration possibilities of Red Hat OpenStack Platform director. You can configure multiple collectd plug-ins to suit your environment. Each documented plug-in has a description and example configuration. Some plug-ins have a table of metrics that you can query for from Grafana or Prometheus, and a list of options you can configure, if available. Additional resources To view a complete list of collectd plugin options, see collectd plugins in the Service Telemetry Framework guide. 3.5. YAML files You can include the following YAML files in your overcloud deploy command when you configure collectd: collectd-read-rabbitmq.yaml : Enables and configures python-collect-rabbitmq to monitor the overcloud RabbitMQ instance. collectd-write-qdr.yaml : Enables collectd to send telemetry and notification data through AMQ Interconnect. qdr-edge-only.yaml : Enables deployment of AMQ Interconnect. Each overcloud node has one local qdrouterd service running and operating in edge mode. For example, sending received data straight to defined MetricsQdrConnectors . qdr-form-controller-mesh.yaml : Enables deployment of AMQ Interconnect. Each overcloud node has one local qdrouterd service forming a mesh topology. For example, AMQ Interconnect routers on controllers operate in interior router mode, with connections to defined MetricsQdrConnectors , and AMQ Interconnect routers on other node types connect in edge mode to the interior routers running on the controllers. Additional resources For more information about configuring collectd, see Section 3.2, "Setting monitoring client parameters" . | [
"MetricsQdrConnectors: - host: qdr-normal-sa-telemetry.apps.remote.tld port: 443 role: inter-router sslProfile: sslProfile verifyHostname: false MetricsQdrSSLProfiles: - name: sslProfile",
"parameter_defaults: CollectdExtraPlugins: - disk - df - virt ExtraConfig: collectd::plugin::virt::connection: \"qemu:///system\" collectd::plugin::virt::hostname_format: \"hostname uuid\"",
"parameter_defaults: MetricsQdrExternalEndpoint: true",
"parameter_defaults: CollectdEnableSensubility: true # Use this if there is restricted access for your checks by using the sudo command. # The rule will be created in /etc/sudoers.d for sensubility to enable it calling restricted commands via sensubility executor. CollectdSensubilityExecSudoRule: \"collectd ALL = NOPASSWD: <some command or ALL for all commands>\" # Connection URL to Sensu server side for reporting check results. CollectdSensubilityConnection: \"amqp://sensu:sensu@<sensu server side IP>:5672//sensu\" # Interval in seconds for sending keepalive messages to Sensu server side. CollectdSensubilityKeepaliveInterval: 20 # Path to temporary directory where the check scripts are created. CollectdSensubilityTmpDir: /var/tmp/collectd-sensubility-checks # Path to shell used for executing check scripts. CollectdSensubilityShellPath: /usr/bin/sh # To improve check execution rate use this parameter and value to change the number of goroutines spawned for executing check scripts. CollectdSensubilityWorkerCount: 2 # JSON-formatted definition of standalone checks to be scheduled on client side. If you need to schedule checks # on overcloud nodes instead of Sensu server, use this parameter. Configuration is compatible with Sensu check definition. # For more information, see https://docs.sensu.io/sensu-core/1.7/reference/checks/#check-definition-specification # There are some configuration options which sensubility ignores such as: extension, publish, cron, stdin, hooks. CollectdSensubilityChecks: example: command: \"ping -c1 -W1 8.8.8.8\" interval: 30 # The following parameters are used to modify standard, standalone checks for monitoring container health on overcloud nodes. # Do not modify these parameters. # CollectdEnableContainerHealthCheck: true # CollectdContainerHealthCheckCommand: <snip> # CollectdContainerHealthCheckInterval: 10 # The Sensu server side event handler to use for events created by the container health check. # CollectdContainerHealthCheckHandlers: # - handle-container-health-check # CollectdContainerHealthCheckOccurrences: 3 # CollectdContainerHealthCheckRefresh: 90",
"openstack overcloud deploy -e /home/templates/custom/config.yaml -e tripleo-heat-templates/environments/metrics/collectd-write-qdr.yaml -e tripleo-heat-templates/environments/metrics/qdr-form-controller-mesh.yaml",
"parameter_defaults: MetricsQdrExternalEndpoint: true",
"openstack overcloud deploy <other arguments> --templates /usr/share/openstack-tripleo-heat-templates --environment-file <...other-environment-files...> --environment-file /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-form-controller-mesh.yaml --environment-file /home/stack/data-collection.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/monitoring_tools_configuration_guide/sect-Client-tools |
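The chapter states that overcloud RabbitMQ monitoring is enabled by adding collectd-read-rabbitmq.yaml to the overcloud deploy command, but the combined invocation is not shown. The following is a sketch only: it extends the deploy command used earlier in this chapter, and the absolute template paths and custom config location are assumptions to adjust for your environment.

```bash
# Sketch: deploy with collectd, AMQ Interconnect mesh routing, and the
# optional RabbitMQ monitoring environment file. Paths are assumptions.
openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
  -e /home/templates/custom/config.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/collectd-write-qdr.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/collectd-read-rabbitmq.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-form-controller-mesh.yaml
```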
Chapter 4. Configuring Red Hat Quay | Chapter 4. Configuring Red Hat Quay Before running the Red Hat Quay service as a container, you need to use that same Quay container to create the configuration file ( config.yaml ) needed to deploy Red Hat Quay. To do that, you pass a config argument and a password (replace my-secret-password here) to the Quay container. Later, you use that password to log into the configuration tool as the user quayconfig . Here's an example of how to do that: Start quay in setup mode : On the first quay node, run the following: Open browser : When the quay configuration tool starts up, open a browser to the URL and port 8080 of the system you are running the configuration tool on (for example http://myquay.example.com:8080 ). You are prompted for a username and password. Log in as quayconfig : When prompted, enter the quayconfig username and password (the one from the podman run command line). Fill in the required fields : When you start the config tool without mounting an existing configuration bundle, you will be booted into an initial setup session. In a setup session, default values will be filled automatically. The following steps will walk through how to fill out the remaining required fields. Identify the database : For the initial setup, you must include the following information about the type and location of the database to be used by Red Hat Quay: Database Type : Choose MySQL or PostgreSQL. MySQL will be used in the basic example; PostgreSQL is used with the high availability Red Hat Quay on OpenShift examples. Database Server : Identify the IP address or hostname of the database, along with the port number if it is different from 3306. Username : Identify a user with full access to the database. Password : Enter the password you assigned to the selected user. Database Name : Enter the database name you assigned when you started the database server. SSL Certificate : For production environments, you should provide an SSL certificate to connect to the database. The following figure shows an example of the screen for identifying the database used by Red Hat Quay: Identify the Redis hostname, Server Configuration and add other desired settings : Other settings you can add to complete the setup are as follows. More settings are required for a high availability Red Hat Quay deployment than for the basic deployment: For the basic, test configuration, identifying the Redis Hostname should be all you need to do. However, you can add other features, such as Clair Scanning and Repository Mirroring, as described at the end of this procedure. For the high availability and OpenShift configurations, more settings are needed (as noted below) to allow for shared storage, secure communications between systems, and other features. Here are the settings you need to consider: Custom SSL Certificates : Upload custom or self-signed SSL certificates for use by Red Hat Quay. See Using SSL to protect connections to Red Hat Quay for details. Recommended for high availability. Important Using SSL certificates is recommended for both basic and high availability deployments. If you decide not to use SSL, you must configure your container clients to use your new Red Hat Quay setup as an insecure registry as described in Test an Insecure Registry . Basic Configuration : Upload a company logo to rebrand your Red Hat Quay registry. Server Configuration : Hostname or IP address to reach the Red Hat Quay service, along with TLS indication (recommended for production installations).
The Server Hostname is required for all Red Hat Quay deployments. TLS termination can be done in two different ways: On the instance itself, with all TLS traffic governed by the nginx server in the Quay container (recommended). On the load balancer. This is not recommended. Access to Red Hat Quay could be lost if the TLS setup is not done correctly on the load balancer. Data Consistency Settings : Select to relax logging consistency guarantees to improve performance and availability. Time Machine : Allow older image tags to remain in the repository for set periods of time and allow users to select their own tag expiration times. redis : Identify the hostname or IP address (and optional password) to connect to the redis service used by Red Hat Quay. Repository Mirroring : Choose the checkbox to Enable Repository Mirroring. With this enabled, you can create repositories in your Red Hat Quay cluster that mirror selected repositories from remote registries. Before you can enable repository mirroring, start the repository mirroring worker as described later in this procedure. Registry Storage : Identify the location of storage. A variety of cloud and local storage options are available. Remote storage is required for high availability. Identify the Ceph storage location if you are following the example for Red Hat Quay high availability storage. On OpenShift, the example uses Amazon S3 storage. Action Log Storage Configuration : Action logs are stored in the Red Hat Quay database by default. If you have a large amount of action logs, you can have those logs directed to Elasticsearch for later search and analysis. To do this, change the value of Action Logs Storage to Elasticsearch and configure related settings as described in Configure action log storage . Action Log Rotation and Archiving : Select to enable log rotation, which moves logs older than 30 days into storage, then indicate storage area. Security Scanner : Enable security scanning by selecting a security scanner endpoint and authentication key. To setup Clair to do image scanning, refer to Clair Setup and Configuring Clair . Recommended for high availability. Application Registry : Enable an additional application registry that includes things like Kubernetes manifests or Helm charts (see the App Registry specification ). rkt Conversion : Allow rkt fetch to be used to fetch images from Red Hat Quay registry. Public and private GPG2 keys are needed. This field is deprecated. E-mail : Enable e-mail to use for notifications and user password resets. Internal Authentication : Change default authentication for the registry from Local Database to LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token. External Authorization (OAuth) : Enable to allow GitHub or GitHub Enterprise to authenticate to the registry. Google Authentication : Enable to allow Google to authenticate to the registry. Access Settings : Basic username/password authentication is enabled by default. Other authentication types that can be enabled include: external application tokens (user-generated tokens used with docker or rkt commands), anonymous access (enable for public access to anyone who can get to the registry), user creation (let users create their own accounts), encrypted client password (require command-line user access to include encrypted passwords), and prefix username autocompletion (disable to require exact username matches on autocompletion). 
Registry Protocol Settings : Leave the Restrict V1 Push Support checkbox enabled to restrict access to Docker V1 protocol pushes. Although Red Hat recommends against enabling the Docker V1 push protocol, if you do allow it, you must explicitly whitelist the namespaces for which it is enabled. Dockerfile Build Support : Enable to allow users to submit Dockerfiles to be built and pushed to Red Hat Quay. This is not recommended for multitenant environments. Validate the changes : Select Validate Configuration Changes . If validation is successful, you will be presented with the following Download Configuration modal: Download configuration : Select the Download Configuration button and save the tarball ( quay-config.tar.gz ) to a local directory to use later to start Red Hat Quay. At this point, you can shut down the Red Hat Quay configuration tool and close your browser. Next, copy the tarball file to the system on which you want to install your first Red Hat Quay node. For a basic install, you might just be running Red Hat Quay on the same system. | [
"sudo podman run --rm -it --name quay_config -p 8080:8080 registry.redhat.io/quay/quay-rhel8:v3.12.8 config my-secret-password"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploy_red_hat_quay_-_high_availability/configuring_red_hat_quay |
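The procedure ends with copying quay-config.tar.gz to the node that will run Red Hat Quay; starting the registry itself is covered elsewhere in the guide. As a hedged sketch only (the host path, published ports, and the /conf/stack mount point are assumptions based on common Red Hat Quay deployments, not a verbatim step from this chapter), starting the registry with the downloaded bundle could look like this:

```bash
# Sketch: extract the configuration bundle and start Quay in registry mode.
# /mnt/quay/config is an assumed host path; adjust it for your environment.
mkdir -p /mnt/quay/config
tar xzvf quay-config.tar.gz -C /mnt/quay/config

# Mount the extracted bundle where the Quay container reads config.yaml from,
# and publish the HTTP/HTTPS ports.
sudo podman run -d --name=quay \
  -p 80:8080 -p 443:8443 \
  -v /mnt/quay/config:/conf/stack:Z \
  registry.redhat.io/quay/quay-rhel8:v3.12.8
```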
Chapter 8. Sources | Chapter 8. Sources The updated Red Hat Ceph Storage source code packages are available at the following location: For Red Hat Enterprise Linux 8: http://ftp.redhat.com/redhat/linux/enterprise/8Base/en/RHCEPH/SRPMS/ | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/release_notes/sources |
Chapter 7. NVIDIA GPU architecture overview | Chapter 7. NVIDIA GPU architecture overview NVIDIA supports the use of graphics processing unit (GPU) resources on OpenShift Container Platform. OpenShift Container Platform is a security-focused and hardened Kubernetes platform developed and supported by Red Hat for deploying and managing Kubernetes clusters at scale. OpenShift Container Platform includes enhancements to Kubernetes so that users can easily configure and use NVIDIA GPU resources to accelerate workloads. The NVIDIA GPU Operator leverages the Operator framework within OpenShift Container Platform to manage the full lifecycle of NVIDIA software components required to run GPU-accelerated workloads. These components include the NVIDIA drivers (to enable CUDA), the Kubernetes device plugin for GPUs, the NVIDIA Container Toolkit, automatic node tagging using GPU feature discovery (GFD), DCGM-based monitoring, and others. Note The NVIDIA GPU Operator is only supported by NVIDIA. For more information about obtaining support from NVIDIA, see Obtaining Support from NVIDIA . 7.1. NVIDIA GPU prerequisites A working OpenShift cluster with at least one GPU worker node. Access to the OpenShift cluster as a cluster-admin to perform the required steps. OpenShift CLI ( oc ) is installed. The node feature discovery (NFD) Operator is installed and a nodefeaturediscovery instance is created. 7.2. NVIDIA GPU enablement The following diagram shows how the GPU architecture is enabled for OpenShift: Figure 7.1. NVIDIA GPU enablement Note MIG is only supported with A30, A100, A100X, A800, AX800, H100, and H800. 7.2.1. GPUs and bare metal You can deploy OpenShift Container Platform on an NVIDIA-certified bare metal server but with some limitations: Control plane nodes can be CPU nodes. Worker nodes must be GPU nodes, provided that AI/ML workloads are executed on these worker nodes. In addition, the worker nodes can host one or more GPUs, but they must be of the same type. For example, a node can have two NVIDIA A100 GPUs, but a node with one A100 GPU and one T4 GPU is not supported. The NVIDIA Device Plugin for Kubernetes does not support mixing different GPU models on the same node. When using OpenShift, note that one or three or more servers are required. Clusters with two servers are not supported. The single server deployment is called single node openShift (SNO) and using this configuration results in a non-high availability OpenShift environment. You can choose one of the following methods to access the containerized GPUs: GPU passthrough Multi-Instance GPU (MIG) Additional resources Red Hat OpenShift on Bare Metal Stack 7.2.2. GPUs and virtualization Many developers and enterprises are moving to containerized applications and serverless infrastructures, but there is still a lot of interest in developing and maintaining applications that run on virtual machines (VMs). Red Hat OpenShift Virtualization provides this capability, enabling enterprises to incorporate VMs into containerized workflows within clusters. You can choose one of the following methods to connect the worker nodes to the GPUs: GPU passthrough to access and use GPU hardware within a virtual machine (VM). GPU (vGPU) time-slicing, when GPU compute capacity is not saturated by workloads. Additional resources NVIDIA GPU Operator with OpenShift Virtualization 7.2.3. GPUs and vSphere You can deploy OpenShift Container Platform on an NVIDIA-certified VMware vSphere server that can host different GPU types. 
An NVIDIA GPU driver must be installed in the hypervisor if vGPU instances are used by the VMs. For VMware vSphere, this host driver is provided in the form of a VIB file. The maximum number of vGPUs that can be allocated to worker node VMs depends on the version of vSphere: vSphere 7.0: maximum 4 vGPU per VM vSphere 8.0: maximum 8 vGPU per VM Note vSphere 8.0 introduced support for multiple full or fractional heterogeneous profiles associated with a VM. You can choose one of the following methods to attach the worker nodes to the GPUs: GPU passthrough for accessing and using GPU hardware within a virtual machine (VM) GPU (vGPU) time-slicing, when not all of the GPU is needed Similar to bare metal deployments, one or three or more servers are required. Clusters with two servers are not supported. Additional resources OpenShift Container Platform on VMware vSphere with NVIDIA vGPUs 7.2.4. GPUs and Red Hat KVM You can use OpenShift Container Platform on an NVIDIA-certified kernel-based virtual machine (KVM) server. Similar to bare-metal deployments, one or three or more servers are required. Clusters with two servers are not supported. However, unlike bare-metal deployments, you can use different types of GPUs in the server. This is because you can assign these GPUs to different VMs that act as Kubernetes nodes. The only limitation is that a Kubernetes node must have the same set of GPU types at its own level. You can choose one of the following methods to access the containerized GPUs: GPU passthrough for accessing and using GPU hardware within a virtual machine (VM) GPU (vGPU) time-slicing when not all of the GPU is needed To enable the vGPU capability, a special driver must be installed at the host level. This driver is delivered as an RPM package. This host driver is not required at all for GPU passthrough allocation. Additional resources How To Deploy OpenShift Container Platform 4.13 on KVM 7.2.5. GPUs and CSPs You can deploy OpenShift Container Platform to one of the major cloud service providers (CSPs): Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. Two modes of operation are available: a fully managed deployment and a self-managed deployment. In a fully managed deployment, everything is automated by Red Hat in collaboration with the CSP. You can request an OpenShift instance through the CSP web console, and the cluster is automatically created and fully managed by Red Hat. You do not have to worry about node failures or errors in the environment. Red Hat is fully responsible for maintaining the uptime of the cluster. The fully managed services are available on AWS and Azure. For AWS, the OpenShift service is called ROSA (Red Hat OpenShift Service on AWS). For Azure, the service is called Azure Red Hat OpenShift. In a self-managed deployment, you are responsible for instantiating and maintaining the OpenShift cluster. Red Hat provides the openshift-install utility to support the deployment of the OpenShift cluster in this case. The self-managed services are available globally to all CSPs. It is important that this compute instance is a GPU-accelerated compute instance and that the GPU type matches the list of supported GPUs from NVIDIA AI Enterprise. For example, T4, V100, and A100 are part of this list. You can choose one of the following methods to access the containerized GPUs: GPU passthrough to access and use GPU hardware within a virtual machine (VM). GPU (vGPU) time slicing when the entire GPU is not required.
Additional resources Red Hat Openshift in the Cloud 7.2.6. GPUs and Red Hat Device Edge Red Hat Device Edge provides access to MicroShift. MicroShift provides the simplicity of a single-node deployment with the functionality and services you need for resource-constrained (edge) computing. Red Hat Device Edge meets the needs of bare-metal, virtual, containerized, or Kubernetes workloads deployed in resource-constrained environments. You can enable NVIDIA GPUs on containers in a Red Hat Device Edge environment. You use GPU passthrough to access the containerized GPUs. Additional resources How to accelerate workloads with NVIDIA GPUs on Red Hat Device Edge 7.3. GPU sharing methods Red Hat and NVIDIA have developed GPU concurrency and sharing mechanisms to simplify GPU-accelerated computing on an enterprise-level OpenShift Container Platform cluster. Applications typically have different compute requirements that can leave GPUs underutilized. Providing the right amount of compute resources for each workload is critical to reduce deployment cost and maximize GPU utilization. Concurrency mechanisms for improving GPU utilization exist that range from programming model APIs to system software and hardware partitioning, including virtualization. The following list shows the GPU concurrency mechanisms: Compute Unified Device Architecture (CUDA) streams Time-slicing CUDA Multi-Process Service (MPS) Multi-instance GPU (MIG) Virtualization with vGPU Consider the following GPU sharing suggestions when using the GPU concurrency mechanisms for different OpenShift Container Platform scenarios: Bare metal vGPU is not available. Consider using MIG-enabled cards. VMs vGPU is the best choice. Older NVIDIA cards with no MIG on bare metal Consider using time-slicing. VMs with multiple GPUs and you want passthrough and vGPU Consider using separate VMs. Bare metal with OpenShift Virtualization and multiple GPUs Consider using pass-through for hosted VMs and time-slicing for containers. Additional resources Improving GPU Utilization 7.3.1. CUDA streams Compute Unified Device Architecture (CUDA) is a parallel computing platform and programming model developed by NVIDIA for general computing on GPUs. A stream is a sequence of operations that executes in issue-order on the GPU. CUDA commands are typically executed sequentially in a default stream and a task does not start until a preceding task has completed. Asynchronous processing of operations across different streams allows for parallel execution of tasks. A task issued in one stream runs before, during, or after another task is issued into another stream. This allows the GPU to run multiple tasks simultaneously in no prescribed order, leading to improved performance. Additional resources Asynchronous Concurrent Execution 7.3.2. Time-slicing GPU time-slicing interleaves workloads scheduled on overloaded GPUs when you are running multiple CUDA applications. You can enable time-slicing of GPUs on Kubernetes by defining a set of replicas for a GPU, each of which can be independently distributed to a pod to run workloads on. Unlike multi-instance GPU (MIG), there is no memory or fault isolation between replicas, but for some workloads this is better than not sharing at all. Internally, GPU time-slicing is used to multiplex workloads from replicas of the same underlying GPU. You can apply a cluster-wide default configuration for time-slicing. You can also apply node-specific configurations. 
For example, you can apply a time-slicing configuration only to nodes with Tesla T4 GPUs and not modify nodes with other GPU models. You can combine these two approaches by applying a cluster-wide default configuration and then labeling nodes to give those nodes a node-specific configuration. 7.3.3. CUDA Multi-Process Service CUDA Multi-Process Service (MPS) allows a single GPU to use multiple CUDA processes. The processes run in parallel on the GPU, eliminating saturation of the GPU compute resources. MPS also enables concurrent execution, or overlapping, of kernel operations and memory copying from different processes to enhance utilization. Additional resources CUDA MPS 7.3.4. Multi-instance GPU Using Multi-instance GPU (MIG), you can split GPU compute units and memory into multiple MIG instances. Each of these instances represents a standalone GPU device from a system perspective and can be connected to any application, container, or virtual machine running on the node. The software that uses the GPU treats each of these MIG instances as an individual GPU. MIG is useful when you have an application that does not require the full power of an entire GPU. The MIG feature of the new NVIDIA Ampere architecture enables you to split your hardware resources into multiple GPU instances, each of which is available to the operating system as an independent CUDA-enabled GPU. NVIDIA GPU Operator version 1.7.0 and higher provides MIG support for the A100 and A30 Ampere cards. These GPU instances are designed to support up to seven multiple independent CUDA applications so that they operate completely isolated with dedicated hardware resources. Additional resources NVIDIA Multi-Instance GPU User Guide 7.3.5. Virtualization with vGPU Virtual machines (VMs) can directly access a single physical GPU using NVIDIA vGPU. You can create virtual GPUs that can be shared by VMs across the enterprise and accessed by other devices. This capability combines the power of GPU performance with the management and security benefits provided by vGPU. Additional benefits provided by vGPU includes proactive management and monitoring for your VM environment, workload balancing for mixed VDI and compute workloads, and resource sharing across multiple VMs. Additional resources Virtual GPUs 7.4. NVIDIA GPU features for OpenShift Container Platform NVIDIA Container Toolkit NVIDIA Container Toolkit enables you to create and run GPU-accelerated containers. The toolkit includes a container runtime library and utilities to automatically configure containers to use NVIDIA GPUs. NVIDIA AI Enterprise NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics software optimized, certified, and supported with NVIDIA-Certified systems. NVIDIA AI Enterprise includes support for Red Hat OpenShift Container Platform. The following installation methods are supported: OpenShift Container Platform on bare metal or VMware vSphere with GPU Passthrough. OpenShift Container Platform on VMware vSphere with NVIDIA vGPU. GPU Feature Discovery NVIDIA GPU Feature Discovery for Kubernetes is a software component that enables you to automatically generate labels for the GPUs available on a node. GPU Feature Discovery uses node feature discovery (NFD) to perform this labeling. The Node Feature Discovery Operator (NFD) manages the discovery of hardware features and configurations in an OpenShift Container Platform cluster by labeling nodes with hardware-specific information. 
NFD labels the host with node-specific attributes, such as PCI cards, kernel, OS version, and so on. You can find the NFD Operator in the Operator Hub by searching for "Node Feature Discovery". NVIDIA GPU Operator with OpenShift Virtualization Up until this point, the GPU Operator only provisioned worker nodes to run GPU-accelerated containers. Now, the GPU Operator can also be used to provision worker nodes for running GPU-accelerated virtual machines (VMs). You can configure the GPU Operator to deploy different software components to worker nodes depending on which GPU workload is configured to run on those nodes. GPU Monitoring dashboard You can install a monitoring dashboard to display GPU usage information on the cluster Observe page in the OpenShift Container Platform web console. GPU utilization information includes the number of available GPUs, power consumption (in watts), temperature (in degrees Celsius), utilization (in percent), and other metrics for each GPU. Additional resources NVIDIA-Certified Systems NVIDIA AI Enterprise NVIDIA Container Toolkit Enabling the GPU Monitoring Dashboard MIG Support in OpenShift Container Platform Time-slicing NVIDIA GPUs in OpenShift Deploy GPU Operators in a disconnected or airgapped environment Node Feature Discovery Operator | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/architecture/nvidia-gpu-architecture-overview |
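Section 7.3.2 describes time-slicing as a cluster-wide default configuration plus optional node-specific overrides, but this overview does not include a configuration sample. The following is a sketch only, not the authoritative procedure: the ConfigMap name, the nvidia-gpu-operator namespace, the any key, and the replica count are assumptions; see the "Time-slicing NVIDIA GPUs in OpenShift" resource listed above for the exact steps supported by your GPU Operator version.

```bash
# Sketch: a cluster-wide time-slicing configuration that advertises each
# physical GPU as four schedulable nvidia.com/gpu replicas. Names are assumptions.
oc apply -f - << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config
  namespace: nvidia-gpu-operator
data:
  any: |-
    version: v1
    sharing:
      timeSlicing:
        resources:
        - name: nvidia.com/gpu
          replicas: 4
EOF
```

For the replicas to take effect, the GPU Operator's ClusterPolicy must reference this ConfigMap through its device plugin configuration; node-specific behavior is achieved by adding per-GPU-model keys alongside the default one and labeling nodes to select them.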
Chapter 4. Configuring the Cluster Observability Operator to monitor a service | Chapter 4. Configuring the Cluster Observability Operator to monitor a service You can monitor metrics for a service by configuring monitoring stacks managed by the Cluster Observability Operator (COO). To test monitoring a service, follow these steps: Deploy a sample service that defines a service endpoint. Create a ServiceMonitor object that specifies how the service is to be monitored by the COO. Create a MonitoringStack object to discover the ServiceMonitor object. 4.1. Deploying a sample service for Cluster Observability Operator This configuration deploys a sample service named prometheus-coo-example-app in the user-defined ns1-coo project. The service exposes the custom version metric. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. Procedure Create a YAML file named prometheus-coo-example-app.yaml that contains the following configuration details for a namespace, deployment, and service: apiVersion: v1 kind: Namespace metadata: name: ns1-coo --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: replicas: 1 selector: matchLabels: app: prometheus-coo-example-app template: metadata: labels: app: prometheus-coo-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-coo-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-coo-example-app type: ClusterIP Save the file. Apply the configuration to the cluster by running the following command: USD oc apply -f prometheus-coo-example-app.yaml Verify that the pod is running by running the following command and observing the output: USD oc -n ns1-coo get pod Example output NAME READY STATUS RESTARTS AGE prometheus-coo-example-app-0927545cb7-anskj 1/1 Running 0 81m 4.2. Specifying how a service is monitored by Cluster Observability Operator To use the metrics exposed by the sample service you created in the "Deploying a sample service for Cluster Observability Operator" section, you must configure monitoring components to scrape metrics from the /metrics endpoint. You can create this configuration by using a ServiceMonitor object that specifies how the service is to be monitored, or a PodMonitor object that specifies how a pod is to be monitored. The ServiceMonitor object requires a Service object. The PodMonitor object does not, which enables the MonitoringStack object to scrape metrics directly from the metrics endpoint exposed by a pod. This procedure shows how to create a ServiceMonitor object for a sample service named prometheus-coo-example-app in the ns1-coo namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. You have installed the Cluster Observability Operator. You have deployed the prometheus-coo-example-app sample service in the ns1-coo namespace. Note The prometheus-coo-example-app sample service does not support TLS authentication. 
Procedure Create a YAML file named example-coo-app-service-monitor.yaml that contains the following ServiceMonitor object configuration details: apiVersion: monitoring.rhobs/v1 kind: ServiceMonitor metadata: labels: k8s-app: prometheus-coo-example-monitor name: prometheus-coo-example-monitor namespace: ns1-coo spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-coo-example-app This configuration defines a ServiceMonitor object that the MonitoringStack object will reference to scrape the metrics data exposed by the prometheus-coo-example-app sample service. Apply the configuration to the cluster by running the following command: USD oc apply -f example-coo-app-service-monitor.yaml Verify that the ServiceMonitor resource is created by running the following command and observing the output: USD oc -n ns1-coo get servicemonitors.monitoring.rhobs Example output NAME AGE prometheus-coo-example-monitor 81m 4.3. Creating a MonitoringStack object for the Cluster Observability Operator To scrape the metrics data exposed by the target prometheus-coo-example-app service, create a MonitoringStack object that references the ServiceMonitor object you created in the "Specifying how a service is monitored by Cluster Observability Operator" section. This MonitoringStack object can then discover the service and scrape the exposed metrics data from it. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. You have installed the Cluster Observability Operator. You have deployed the prometheus-coo-example-app sample service in the ns1-coo namespace. You have created a ServiceMonitor object named prometheus-coo-example-monitor in the ns1-coo namespace. Procedure Create a YAML file for the MonitoringStack object configuration. For this example, name the file example-coo-monitoring-stack.yaml . Add the following MonitoringStack object configuration details: Example MonitoringStack object apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: example-coo-monitoring-stack namespace: ns1-coo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: k8s-app: prometheus-coo-example-monitor Apply the MonitoringStack object by running the following command: USD oc apply -f example-coo-monitoring-stack.yaml Verify that the MonitoringStack object is available by running the following command and inspecting the output: USD oc -n ns1-coo get monitoringstack Example output NAME AGE example-coo-monitoring-stack 81m Run the following command to retrieve information about the active targets from Prometheus and filter the output to list only targets labeled with app=prometheus-coo-example-app . This verifies which targets are discovered and actively monitored by Prometheus with this specific label.
USD oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/targets' | jq '.data.activeTargets[].discoveredLabels | select(.__meta_kubernetes_endpoints_label_app=="prometheus-coo-example-app")' Example output { "__address__": "10.129.2.25:8080", "__meta_kubernetes_endpoint_address_target_kind": "Pod", "__meta_kubernetes_endpoint_address_target_name": "prometheus-coo-example-app-5d8cd498c7-9j2gj", "__meta_kubernetes_endpoint_node_name": "ci-ln-8tt8vxb-72292-6cxjr-worker-a-wdfnz", "__meta_kubernetes_endpoint_port_name": "web", "__meta_kubernetes_endpoint_port_protocol": "TCP", "__meta_kubernetes_endpoint_ready": "true", "__meta_kubernetes_endpoints_annotation_endpoints_kubernetes_io_last_change_trigger_time": "2024-11-05T11:24:09Z", "__meta_kubernetes_endpoints_annotationpresent_endpoints_kubernetes_io_last_change_trigger_time": "true", "__meta_kubernetes_endpoints_label_app": "prometheus-coo-example-app", "__meta_kubernetes_endpoints_labelpresent_app": "true", "__meta_kubernetes_endpoints_name": "prometheus-coo-example-app", "__meta_kubernetes_namespace": "ns1-coo", "__meta_kubernetes_pod_annotation_k8s_ovn_org_pod_networks": "{\"default\":{\"ip_addresses\":[\"10.129.2.25/23\"],\"mac_address\":\"0a:58:0a:81:02:19\",\"gateway_ips\":[\"10.129.2.1\"],\"routes\":[{\"dest\":\"10.128.0.0/14\",\"nextHop\":\"10.129.2.1\"},{\"dest\":\"172.30.0.0/16\",\"nextHop\":\"10.129.2.1\"},{\"dest\":\"100.64.0.0/16\",\"nextHop\":\"10.129.2.1\"}],\"ip_address\":\"10.129.2.25/23\",\"gateway_ip\":\"10.129.2.1\",\"role\":\"primary\"}}", "__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status": "[{\n \"name\": \"ovn-kubernetes\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.129.2.25\"\n ],\n \"mac\": \"0a:58:0a:81:02:19\",\n \"default\": true,\n \"dns\": {}\n}]", "__meta_kubernetes_pod_annotation_openshift_io_scc": "restricted-v2", "__meta_kubernetes_pod_annotation_seccomp_security_alpha_kubernetes_io_pod": "runtime/default", "__meta_kubernetes_pod_annotationpresent_k8s_ovn_org_pod_networks": "true", "__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status": "true", "__meta_kubernetes_pod_annotationpresent_openshift_io_scc": "true", "__meta_kubernetes_pod_annotationpresent_seccomp_security_alpha_kubernetes_io_pod": "true", "__meta_kubernetes_pod_controller_kind": "ReplicaSet", "__meta_kubernetes_pod_controller_name": "prometheus-coo-example-app-5d8cd498c7", "__meta_kubernetes_pod_host_ip": "10.0.128.2", "__meta_kubernetes_pod_ip": "10.129.2.25", "__meta_kubernetes_pod_label_app": "prometheus-coo-example-app", "__meta_kubernetes_pod_label_pod_template_hash": "5d8cd498c7", "__meta_kubernetes_pod_labelpresent_app": "true", "__meta_kubernetes_pod_labelpresent_pod_template_hash": "true", "__meta_kubernetes_pod_name": "prometheus-coo-example-app-5d8cd498c7-9j2gj", "__meta_kubernetes_pod_node_name": "ci-ln-8tt8vxb-72292-6cxjr-worker-a-wdfnz", "__meta_kubernetes_pod_phase": "Running", "__meta_kubernetes_pod_ready": "true", "__meta_kubernetes_pod_uid": "054c11b6-9a76-4827-a860-47f3a4596871", "__meta_kubernetes_service_label_app": "prometheus-coo-example-app", "__meta_kubernetes_service_labelpresent_app": "true", "__meta_kubernetes_service_name": "prometheus-coo-example-app", "__metrics_path__": "/metrics", "__scheme__": "http", "__scrape_interval__": "30s", "__scrape_timeout__": "10s", "job": "serviceMonitor/ns1-coo/prometheus-coo-example-monitor/0" } Note The above example uses jq command-line JSON processor to format the output for 
convenience. 4.4. Validating the monitoring stack To validate that the monitoring stack is working correctly, access the example service and then view the gathered metrics. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. You have installed the Cluster Observability Operator. You have deployed the prometheus-coo-example-app sample service in the ns1-coo namespace. You have created a ServiceMonitor object named prometheus-coo-example-monitor in the ns1-coo namespace. You have created a MonitoringStack object named example-coo-monitoring-stack in the ns1-coo namespace. Procedure Create a route to expose the example prometheus-coo-example-app service. From your terminal, run the command: USD oc expose svc prometheus-coo-example-app -n ns1-coo Access the route from your browser, or command line, to generate metrics. Execute a query on the Prometheus pod to return the total HTTP requests metric: USD oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/query?query=http_requests_total' Example output (formatted using jq for convenience) { "status": "success", "data": { "resultType": "vector", "result": [ { "metric": { "__name__": "http_requests_total", "code": "200", "endpoint": "web", "instance": "10.129.2.25:8080", "job": "prometheus-coo-example-app", "method": "get", "namespace": "ns1-coo", "pod": "prometheus-coo-example-app-5d8cd498c7-9j2gj", "service": "prometheus-coo-example-app" }, "value": [ 1730807483.632, "3" ] }, { "metric": { "__name__": "http_requests_total", "code": "404", "endpoint": "web", "instance": "10.129.2.25:8080", "job": "prometheus-coo-example-app", "method": "get", "namespace": "ns1-coo", "pod": "prometheus-coo-example-app-5d8cd498c7-9j2gj", "service": "prometheus-coo-example-app" }, "value": [ 1730807483.632, "0" ] } ] } } | [
"apiVersion: v1 kind: Namespace metadata: name: ns1-coo --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: replicas: 1 selector: matchLabels: app: prometheus-coo-example-app template: metadata: labels: app: prometheus-coo-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-coo-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-coo-example-app type: ClusterIP",
"oc apply -f prometheus-coo-example-app.yaml",
"oc -n ns1-coo get pod",
"NAME READY STATUS RESTARTS AGE prometheus-coo-example-app-0927545cb7-anskj 1/1 Running 0 81m",
"apiVersion: monitoring.rhobs/v1 kind: ServiceMonitor metadata: labels: k8s-app: prometheus-coo-example-monitor name: prometheus-coo-example-monitor namespace: ns1-coo spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-coo-example-app",
"oc apply -f example-coo-app-service-monitor.yaml",
"oc -n ns1-coo get servicemonitors.monitoring.rhobs",
"NAME AGE prometheus-coo-example-monitor 81m",
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: example-coo-monitoring-stack namespace: ns1-coo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: k8s-app: prometheus-coo-example-monitor",
"oc apply -f example-coo-monitoring-stack.yaml",
"oc -n ns1-coo get monitoringstack",
"NAME AGE example-coo-monitoring-stack 81m",
"oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/targets' | jq '.data.activeTargets[].discoveredLabels | select(.__meta_kubernetes_endpoints_label_app==\"prometheus-coo-example-app\")'",
"{ \"__address__\": \"10.129.2.25:8080\", \"__meta_kubernetes_endpoint_address_target_kind\": \"Pod\", \"__meta_kubernetes_endpoint_address_target_name\": \"prometheus-coo-example-app-5d8cd498c7-9j2gj\", \"__meta_kubernetes_endpoint_node_name\": \"ci-ln-8tt8vxb-72292-6cxjr-worker-a-wdfnz\", \"__meta_kubernetes_endpoint_port_name\": \"web\", \"__meta_kubernetes_endpoint_port_protocol\": \"TCP\", \"__meta_kubernetes_endpoint_ready\": \"true\", \"__meta_kubernetes_endpoints_annotation_endpoints_kubernetes_io_last_change_trigger_time\": \"2024-11-05T11:24:09Z\", \"__meta_kubernetes_endpoints_annotationpresent_endpoints_kubernetes_io_last_change_trigger_time\": \"true\", \"__meta_kubernetes_endpoints_label_app\": \"prometheus-coo-example-app\", \"__meta_kubernetes_endpoints_labelpresent_app\": \"true\", \"__meta_kubernetes_endpoints_name\": \"prometheus-coo-example-app\", \"__meta_kubernetes_namespace\": \"ns1-coo\", \"__meta_kubernetes_pod_annotation_k8s_ovn_org_pod_networks\": \"{\\\"default\\\":{\\\"ip_addresses\\\":[\\\"10.129.2.25/23\\\"],\\\"mac_address\\\":\\\"0a:58:0a:81:02:19\\\",\\\"gateway_ips\\\":[\\\"10.129.2.1\\\"],\\\"routes\\\":[{\\\"dest\\\":\\\"10.128.0.0/14\\\",\\\"nextHop\\\":\\\"10.129.2.1\\\"},{\\\"dest\\\":\\\"172.30.0.0/16\\\",\\\"nextHop\\\":\\\"10.129.2.1\\\"},{\\\"dest\\\":\\\"100.64.0.0/16\\\",\\\"nextHop\\\":\\\"10.129.2.1\\\"}],\\\"ip_address\\\":\\\"10.129.2.25/23\\\",\\\"gateway_ip\\\":\\\"10.129.2.1\\\",\\\"role\\\":\\\"primary\\\"}}\", \"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\": \"[{\\n \\\"name\\\": \\\"ovn-kubernetes\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.129.2.25\\\"\\n ],\\n \\\"mac\\\": \\\"0a:58:0a:81:02:19\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\", \"__meta_kubernetes_pod_annotation_openshift_io_scc\": \"restricted-v2\", \"__meta_kubernetes_pod_annotation_seccomp_security_alpha_kubernetes_io_pod\": \"runtime/default\", \"__meta_kubernetes_pod_annotationpresent_k8s_ovn_org_pod_networks\": \"true\", \"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\": \"true\", \"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\": \"true\", \"__meta_kubernetes_pod_annotationpresent_seccomp_security_alpha_kubernetes_io_pod\": \"true\", \"__meta_kubernetes_pod_controller_kind\": \"ReplicaSet\", \"__meta_kubernetes_pod_controller_name\": \"prometheus-coo-example-app-5d8cd498c7\", \"__meta_kubernetes_pod_host_ip\": \"10.0.128.2\", \"__meta_kubernetes_pod_ip\": \"10.129.2.25\", \"__meta_kubernetes_pod_label_app\": \"prometheus-coo-example-app\", \"__meta_kubernetes_pod_label_pod_template_hash\": \"5d8cd498c7\", \"__meta_kubernetes_pod_labelpresent_app\": \"true\", \"__meta_kubernetes_pod_labelpresent_pod_template_hash\": \"true\", \"__meta_kubernetes_pod_name\": \"prometheus-coo-example-app-5d8cd498c7-9j2gj\", \"__meta_kubernetes_pod_node_name\": \"ci-ln-8tt8vxb-72292-6cxjr-worker-a-wdfnz\", \"__meta_kubernetes_pod_phase\": \"Running\", \"__meta_kubernetes_pod_ready\": \"true\", \"__meta_kubernetes_pod_uid\": \"054c11b6-9a76-4827-a860-47f3a4596871\", \"__meta_kubernetes_service_label_app\": \"prometheus-coo-example-app\", \"__meta_kubernetes_service_labelpresent_app\": \"true\", \"__meta_kubernetes_service_name\": \"prometheus-coo-example-app\", \"__metrics_path__\": \"/metrics\", \"__scheme__\": \"http\", \"__scrape_interval__\": \"30s\", \"__scrape_timeout__\": \"10s\", \"job\": \"serviceMonitor/ns1-coo/prometheus-coo-example-monitor/0\" }",
"oc expose svc prometheus-coo-example-app -n ns1-coo",
"oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/query?query=http_requests_total'",
"{ \"status\": \"success\", \"data\": { \"resultType\": \"vector\", \"result\": [ { \"metric\": { \"__name__\": \"http_requests_total\", \"code\": \"200\", \"endpoint\": \"web\", \"instance\": \"10.129.2.25:8080\", \"job\": \"prometheus-coo-example-app\", \"method\": \"get\", \"namespace\": \"ns1-coo\", \"pod\": \"prometheus-coo-example-app-5d8cd498c7-9j2gj\", \"service\": \"prometheus-coo-example-app\" }, \"value\": [ 1730807483.632, \"3\" ] }, { \"metric\": { \"__name__\": \"http_requests_total\", \"code\": \"404\", \"endpoint\": \"web\", \"instance\": \"10.129.2.25:8080\", \"job\": \"prometheus-coo-example-app\", \"method\": \"get\", \"namespace\": \"ns1-coo\", \"pod\": \"prometheus-coo-example-app-5d8cd498c7-9j2gj\", \"service\": \"prometheus-coo-example-app\" }, \"value\": [ 1730807483.632, \"0\" ] } ] } }"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cluster_observability_operator/configuring-the-cluster-observability-operator-to-monitor-a-service |
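As a follow-on to the validation procedure above, the sketch below shows one way to generate a few requests against the exposed route before re-running the http_requests_total query, so the counters have something to report. It assumes the route created by oc expose keeps the service name prometheus-coo-example-app and that you are still working in the ns1-coo namespace; adjust both if your environment differs.

# Look up the route host created by `oc expose svc prometheus-coo-example-app -n ns1-coo`
ROUTE_HOST=$(oc -n ns1-coo get route prometheus-coo-example-app -o jsonpath='{.spec.host}')

# Send a couple of requests; the second path is expected to produce a 404 sample
curl -s "http://${ROUTE_HOST}/" > /dev/null
curl -s "http://${ROUTE_HOST}/does-not-exist" > /dev/null

# Re-run the query from the procedure above and confirm the counters increased
oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- \
  curl -s 'http://localhost:9090/api/v1/query?query=http_requests_total' | jq '.data.result'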
Chapter 3. Configuring the OpenShift CLI | Chapter 3. Configuring the OpenShift CLI Configure OpenShift CLI ( oc ) based on your preferences for working with it. 3.1. Enabling tab completion You can enable tab completion for the Bash or Zsh shells. 3.1.1. Enabling tab completion for Bash After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Bash shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. You must have the package bash-completion installed. Procedure Save the Bash completion code to a file: USD oc completion bash > oc_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp oc_bash_completion /etc/bash_completion.d/ You can also save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 3.1.2. Enabling tab completion for Zsh After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Zsh shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. Procedure To add tab completion for oc to your .zshrc file, run the following command: USD cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF Tab completion is enabled when you open a new terminal. | [
"oc completion bash > oc_bash_completion",
"sudo cp oc_bash_completion /etc/bash_completion.d/",
"cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/cli_tools/cli-configuring-cli |
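The Bash procedure above notes that you can keep the completion script in a local directory and source it from your .bashrc instead of copying it to /etc/bash_completion.d/. A minimal sketch of that alternative follows; the file path under the home directory is only an example.

# Generate the completion script into your home directory
oc completion bash > ~/.oc_bash_completion

# Source it from ~/.bashrc so new shells pick it up
echo 'source ~/.oc_bash_completion' >> ~/.bashrc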
Chapter 2. Project [project.openshift.io/v1] | Chapter 2. Project [project.openshift.io/v1] Description Projects are the unit of isolation and collaboration in OpenShift. A project has one or more members, a quota on the resources that the project may consume, and the security controls on the resources in the project. Within a project, members may have different roles - project administrators can set membership, editors can create and manage the resources, and viewers can see but not access running containers. In a normal cluster project administrators are not able to alter their quotas - that is restricted to cluster administrators. Listing or watching projects will return only projects the user has the reader role on. An OpenShift project is an alternative representation of a Kubernetes namespace. Projects are exposed as editable to end users while namespaces are not. Direct creation of a project is typically restricted to administrators, while end users should use the requestproject resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ProjectSpec describes the attributes on a Project status object ProjectStatus is information about the current status of a Project 2.1.1. .spec Description ProjectSpec describes the attributes on a Project Type object Property Type Description finalizers array (string) Finalizers is an opaque list of values that must be empty to permanently remove object from storage 2.1.2. .status Description ProjectStatus is information about the current status of a Project Type object Property Type Description conditions array (NamespaceCondition) Represents the latest available observations of the project current state. phase string Phase is the current lifecycle phase of the project Possible enum values: - "Active" means the namespace is available for use in the system - "Terminating" means the namespace is undergoing graceful termination 2.2. API endpoints The following API endpoints are available: /apis/project.openshift.io/v1/projects GET : list or watch objects of kind Project POST : create a Project /apis/project.openshift.io/v1/watch/projects GET : watch individual changes to a list of Project. deprecated: use the 'watch' parameter with a list operation instead. /apis/project.openshift.io/v1/projects/{name} DELETE : delete a Project GET : read the specified Project PATCH : partially update the specified Project PUT : replace the specified Project /apis/project.openshift.io/v1/watch/projects/{name} GET : watch changes to an object of kind Project. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/project.openshift.io/v1/projects HTTP method GET Description list or watch objects of kind Project Table 2.1. HTTP responses HTTP code Reponse body 200 - OK ProjectList schema 401 - Unauthorized Empty HTTP method POST Description create a Project Table 2.2. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.3. Body parameters Parameter Type Description body Project schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 202 - Accepted Project schema 401 - Unauthorized Empty 2.2.2. /apis/project.openshift.io/v1/watch/projects HTTP method GET Description watch individual changes to a list of Project. deprecated: use the 'watch' parameter with a list operation instead. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/project.openshift.io/v1/projects/{name} Table 2.6. Global path parameters Parameter Type Description name string name of the Project HTTP method DELETE Description delete a Project Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Project Table 2.9. HTTP responses HTTP code Reponse body 200 - OK Project schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Project Table 2.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Project Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. Body parameters Parameter Type Description body Project schema Table 2.14. HTTP responses HTTP code Reponse body 200 - OK Project schema 201 - Created Project schema 401 - Unauthorized Empty 2.2.4. /apis/project.openshift.io/v1/watch/projects/{name} Table 2.15. Global path parameters Parameter Type Description name string name of the Project HTTP method GET Description watch changes to an object of kind Project. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.16. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/project_apis/project-project-openshift-io-v1 |
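To try the endpoints listed above from a terminal, you can go through the API server with oc get --raw, which returns the same ProjectList document that a GET on /apis/project.openshift.io/v1/projects produces. This is a sketch for quick exploration and assumes you are logged in as a user who can list projects; the jq filter is optional.

# List the projects visible to the current user through the project API
oc get --raw /apis/project.openshift.io/v1/projects | jq '.items[].metadata.name'

# The equivalent typed listing
oc get projects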
Chapter 4. RoleBindingRestriction [authorization.openshift.io/v1] | Chapter 4. RoleBindingRestriction [authorization.openshift.io/v1] Description RoleBindingRestriction is an object that can be matched against a subject (user, group, or service account) to determine whether rolebindings on that subject are allowed in the namespace to which the RoleBindingRestriction belongs. If any one of those RoleBindingRestriction objects matches a subject, rolebindings on that subject in the namespace are allowed. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Spec defines the matcher. 4.1.1. .spec Description Spec defines the matcher. Type object Property Type Description grouprestriction `` GroupRestriction matches against group subjects. serviceaccountrestriction `` ServiceAccountRestriction matches against service-account subjects. userrestriction `` UserRestriction matches against user subjects. 4.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/rolebindingrestrictions GET : list objects of kind RoleBindingRestriction /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindingrestrictions DELETE : delete collection of RoleBindingRestriction GET : list objects of kind RoleBindingRestriction POST : create a RoleBindingRestriction /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindingrestrictions/{name} DELETE : delete a RoleBindingRestriction GET : read the specified RoleBindingRestriction PATCH : partially update the specified RoleBindingRestriction PUT : replace the specified RoleBindingRestriction 4.2.1. /apis/authorization.openshift.io/v1/rolebindingrestrictions Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind RoleBindingRestriction Table 4.2. 
HTTP responses HTTP code Reponse body 200 - OK RoleBindingRestrictionList schema 401 - Unauthorized Empty 4.2.2. /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindingrestrictions Table 4.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of RoleBindingRestriction Table 4.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind RoleBindingRestriction Table 4.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.8. HTTP responses HTTP code Reponse body 200 - OK RoleBindingRestrictionList schema 401 - Unauthorized Empty HTTP method POST Description create a RoleBindingRestriction Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.10. Body parameters Parameter Type Description body RoleBindingRestriction schema Table 4.11. HTTP responses HTTP code Reponse body 200 - OK RoleBindingRestriction schema 201 - Created RoleBindingRestriction schema 202 - Accepted RoleBindingRestriction schema 401 - Unauthorized Empty 4.2.3. /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindingrestrictions/{name} Table 4.12. Global path parameters Parameter Type Description name string name of the RoleBindingRestriction namespace string object name and auth scope, such as for teams and projects Table 4.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a RoleBindingRestriction Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.15. Body parameters Parameter Type Description body DeleteOptions schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RoleBindingRestriction Table 4.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.18. HTTP responses HTTP code Reponse body 200 - OK RoleBindingRestriction schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RoleBindingRestriction Table 4.19. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body Patch schema Table 4.21. HTTP responses HTTP code Reponse body 200 - OK RoleBindingRestriction schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RoleBindingRestriction Table 4.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.23. Body parameters Parameter Type Description body RoleBindingRestriction schema Table 4.24. HTTP responses HTTP code Reponse body 200 - OK RoleBindingRestriction schema 201 - Created RoleBindingRestriction schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/role_apis/rolebindingrestriction-authorization-openshift-io-v1 |
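As an illustration of the spec matchers described above, the following sketch creates a RoleBindingRestriction whose userrestriction matcher allows role bindings only on a single user in a namespace. The namespace, object name, and user name are placeholders, and the users field under userrestriction follows the upstream UserRestriction type, so verify it with oc explain rolebindingrestriction.spec before relying on it.

# Create a RoleBindingRestriction that only allows role bindings on one user (names are placeholders)
oc apply -f - <<'EOF'
apiVersion: authorization.openshift.io/v1
kind: RoleBindingRestriction
metadata:
  name: restrict-to-example-user
  namespace: example-ns
spec:
  userrestriction:
    users:
    - example-user
EOF

# Confirm the object was created in the namespace
oc -n example-ns get rolebindingrestrictions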
Chapter 8. Changes in client components | Chapter 8. Changes in client components This section explains the changes in Eclipse Vert.x clients. 8.1. Changes in Eclipse Vert.x Kafka client The following section describes the changes in the Eclipse Vert.x Kafka client. 8.1.1. AdminUtils class is no longer available The AdminUtils class is no longer available. Use the new KafkaAdminClient class instead to perform administrative operations on a Kafka cluster. 8.1.2. Flush methods use asynchronous handler The flush methods in the KafkaProducer class use Handler<AsyncResult<Void>> instead of Handler<Void>. 8.2. Changes in Eclipse Vert.x JDBC client From Eclipse Vert.x 4, the JDBC client supports the SQL client. The SQL common module has also been merged into the JDBC client, that is, io.vertx:vertx-sql-common has been merged into the io.vertx:vertx-jdbc-client module. You will have to remove the io.vertx:vertx-sql-common dependency because io.vertx:vertx-jdbc-client includes it. With the merging of the SQL common client, all the database APIs have been consolidated into the JDBC client. In Eclipse Vert.x 4, the SQL client has been updated to include the following clients: the reactive PostgreSQL client, which was already available in Eclipse Vert.x 3.x releases and continues to be included; the reactive MySQL client; the reactive DB2 client; and the existing JDBC client, which now includes both the JDBC client API and the SQL client API. The reactive implementations use the database network protocols, which makes them resource-efficient. JDBC calls to the database are blocking calls, so the JDBC client uses worker threads to make these calls non-blocking. The following section describes the changes in the Eclipse Vert.x JDBC client. 8.2.1. Creating a pool In Eclipse Vert.x 4, you can create a pool using the JDBC client APIs. In earlier releases, you could create only clients, not pools. The following example shows how to create a client in Eclipse Vert.x 3.x. // 3.x SQLClient client = JDBCClient.create(vertx, jsonConfig); The following example shows how to create a pool in Eclipse Vert.x 4. // 4.x JDBCPool pool = JDBCPool.pool(vertx, jsonConfig); Note Though the Eclipse Vert.x 3.x APIs are supported in Eclipse Vert.x 4, it is recommended that you use the new JDBC client APIs in your applications. A pool enables you to perform simple queries without managing connections. However, for complex queries or multiple queries, you must manage your connections. The following example shows how to manage connections for queries in Eclipse Vert.x 3.x. // 3.x client.getConnection(res -> { if (res.succeeded()) { SQLConnection connection = res.result(); // Important, do not forget to return the connection connection.close(); } else { // Failed to get connection } }); The following example shows how to manage connections for queries in Eclipse Vert.x 4. // 4.x pool .getConnection() .onFailure(e -> { // Failed to get a connection }) .onSuccess(conn -> { // Important, do not forget to return the connection conn.close(); }); 8.2.2. Support for Typesafe Config You can use jsonConfig for configurations. However, using the jsonConfig may sometimes result in errors. To avoid these errors, the JDBC client introduces Typesafe Config. The following example shows the basic structure of a Typesafe Config. // 4.x ONLY!!!
JDBCPool pool = JDBCPool.pool( vertx, // configure the connection new JDBCConnectOptions() // H2 connection string .setJdbcUrl("jdbc:h2:~/test") // username .setUser("sa") // password .setPassword(""), // configure the pool new PoolOptions() .setMaxSize(16) ); Note To use Typesafe Config, you must include the agroal connection pool in your project. The pool does not expose many configuration options and makes the configuration easy to use. 8.2.3. Running SQL queries This section shows you how to run queries in the JDBC client. 8.2.3.1. Running one shot queries The following example shows how to run queries without managing the connection in Eclipse Vert.x 3.x. // 3.x client.query("SELECT * FROM user WHERE emp_id > ?", new JsonArray().add(1000), res -> { if (res.succeeded()) { ResultSet rs = res2.result(); // You can use these results in your application } }); The following example shows how to run queries without managing the connection in Eclipse Vert.x 4. // 4.x pool .preparedQuery("SELECT * FROM user WHERE emp_id > ?") // the emp id to look up .execute(Tuple.of(1000)) .onSuccess(rows -> { for (Row row : rows) { System.out.println(row.getString("FIRST_NAME")); } }); 8.2.3.2. Running queries on managed connections The following example shows how to run queries on managed connections in Eclipse Vert.x 4. pool .getConnection() .onFailure(e -> { // Failed to get a connection }) .onSuccess(conn -> { conn .query("SELECT * FROM user") .execute() .onFailure(e -> { // Handle the failure // Important, do not forget to return the connection conn.close(); }) .onSuccess(rows -> { for (Row row : rows) { System.out.println(row.getString("FIRST_NAME")); } // Important, do not forget to return the connection conn.close(); }); }); 8.2.4. Support for stored procedures Stored procedures are supported in the JDBC client. The following example shows how to pass IN arguments in Eclipse Vert.x 3.x. // 3.x connection.callWithParams( "{ call new_customer(?, ?) }", new JsonArray().add("John").add("Doe"), null, res -> { if (res.succeeded()) { // Success! } else { // Failed! } }); The following example shows how to pass IN arguments in Eclipse Vert.x 4. // 4.x client .preparedQuery("{call new_customer(?, ?)}") .execute(Tuple.of("Paulo", "Lopes")) .onSuccess(rows -> { ... }); In Eclipse Vert.x 3.x, the support for combining the IN and OUT arguments was very limited due to the available types. In Eclipse Vert.x 4, the pool is type safe and can handle the combination of IN and OUT arguments. You can also use INOUT parameters in your applications. The following example shows handling of arguments in Eclipse Vert.x 3.x. // 3.x connection.callWithParams( "{ call customer_lastname(?, ?) }", new JsonArray().add("John"), new JsonArray().addNull().add("VARCHAR"), res -> { if (res.succeeded()) { ResultSet result = res.result(); } else { // Failed! } }); The following example shows handling of arguments in Eclipse Vert.x 4. // 4.x client .preparedQuery("{call customer_lastname(?, ?)}") .execute(Tuple.of("John", SqlOutParam.OUT(JDBCType.VARCHAR))) .onSuccess(rows -> { ... }); In the JDBC client, the data types have been updated. For an argument of type OUT , you can specify its return type. In the example, the OUT argument is specified as type VARCHAR which is a JDBC constant. The types are not bound by JSON limitations. You can now use database specific types instead of text constants for the type name. 8.3. Changes in Eclipse Vert.x mail client The following section describes the changes in Eclipse Vert.x mail client. 8.3.1. 
MailAttachment is available as an interface From Eclipse Vert.x 4 onwards, MailAttachment is available as an interface. It enables you to use the mail attachment functionality in a stream. In earlier releases of Eclipse Vert.x, MailAttachment was available as a class and attachment for mails was represented as a data object. 8.3.2. Mail configuration interface extends the net client options MailConfig interface extends the NetClientOptions interface. Due to this extension, mail configuration also supports the proxy setting of the NetClient . 8.4. Changes in Eclipse Vert.x AMQP client The following section describes the changes in Eclipse Vert.x AMQP client. 8.4.1. Removed methods in AMQP client that contain AmqpMessage argument The AMQP client methods that had Handler<AmqpMessage> as an argument have been removed. In earlier releases, you could set this handler on ReadStream<AmqpMessage> . However, if you migrate your applications to use futures, such methods cannot be used. Removed methods Replacing methods AmqpClient.createReceiver(String address, Handler<AmqpMessage> messageHandler, ... ) AmqpClient createReceiver(String address, Handler<AsyncResult<AmqpReceiver>> completionHandler) AmqpConnection createReceiver(... , Handler<AsyncResult<AmqpReceiver>> completionHandler) AmqpConnection createReceiver(String address, Handler<AsyncResult<AmqpReceiver>> completionHandler) AmqpConnection createReceiver(.., Handler<AmqpMessage> messageHandler, Handler<AsyncResult<AmqpReceiver>> completionHandler) AmqpConnection createReceiver(String address, Handler<AsyncResult<AmqpReceiver>> completionHandler) 8.5. Changes in Eclipse Vert.x MongoDB client The following section describes the changes in Eclipse Vert.x MongoDB client. 8.5.1. Methods removed from MongoDB client The following methods have been removed from MongoClient class. Removed methods Replacing methods MongoClient.update() MongoClient.updateCollection() MongoClient.updateWithOptions() MongoClient.updateCollectionWithOptions() MongoClient.replace() MongoClient.replaceDocuments() MongoClient.replaceWithOptions() MongoClient.replaceDocumentsWithOptions() MongoClient.remove() MongoClient.removeDocuments() MongoClient.removeWithOptions() MongoClient.removeDocumentsWithOptions() MongoClient.removeOne() MongoClient.removeDocument() MongoClient.removeOneWithOptions() MongoClient.removeDocumentsWithOptions() 8.6. Changes in EventBus JavaScript client In Eclipse Vert.x 4, the EventBus JavaScript client module is available in a new location. You will have to update your build systems to use the module from the new location. In Eclipse Vert.x 3.x, the event bus JavaScript client was available in various locations, for example: Maven Central NPM Bower.io CDNJS webjars In Eclipse Vert.x 4, the JavaScript client is available only in npm . The EventBus JavaScript client module can be accessed from the following locations: CDN npm packages Use the following code in your build scripts to access the module. JSON scripts XML scripts <dependency> <groupId>org.webjars.npm</groupId> <artifactId>vertx__eventbus-bridge-client.js</artifactId> <version>1.0.0-1</version> </dependency> 8.6.1. Versioning of JavaScript client Before Eclipse Vert.x 4, every Eclipse Vert.x release included a new release of the JavaScript client. However, from Eclipse Vert.x 4 onward, a new version of JavaScript client will be available in npm only if there changes in the client. 
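For reference, pulling the EventBus JavaScript client into an npm-based build as described above is a single install command. The scoped package name below is an assumption inferred from the webjars artifact vertx__eventbus-bridge-client.js named earlier, so confirm it against the npm registry and pin the version your project needs.

# Add the EventBus bridge client to an npm-based project (package name assumed)
npm install @vertx/eventbus-bridge-client.js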
You do not need to update your client application for every Eclipse Vert.x release, unless there is a version change. 8.7. Changes in Eclipse Vert.x Redis client In Eclipse Vert.x 4, use the Redis class to work with Redis client. The class RedisClient is no longer available. NOTE To help you migrate your applications from RedisClient to Redis class, a helper class RedisAPI is available. RedisAPI enables you to replicate the functionality similar to RedisClient class. The new class contains all the enhancements in protocols and Redis server features. Use the new class to: Work with all Redis commands Connect to single servers Connect to high availability servers where Redis Sentinel is enabled Connect to cluster configurations of Redis Execute requests in Redis extensions Communicate with both RESP2 and RESP3 server protocol servers 8.7.1. Migrating existing Redis client applications to new client You can migrate your existing applications to new Redis client directly or use the helper class RedisAPI to migrate your applications in two steps. Before migrating the applications you must create the client. 8.7.1.1. Creating the client The following example shows how a Redis client was created in Eclipse Vert.x 3.x releases. // Create the redis client (3.x) RedisClient client = RedisClient .create(vertx, new RedisOptions().setHost(host)); The following example shows how to create a Redis client in Eclipse Vert.x 4. // Create the redis client (4.x) Redis client = Redis .createClient( vertx, "redis://server.address:port"); In Eclipse Vert.x 4, the client uses the following standard connection string syntax: redis[s]://[[user]:password@]server[:port]/[database] 8.7.1.2. Migrating applications to RedisAPI Using the 'RedisAPI` you can now decide how to manage the connection: You can let the client manage the connection for you using a pool. Or You can control the connection by requesting a new connection. You must ensure to close or return the connection when done. You must create the client and then update the applications to handle requests. The following example shows how to handle requests after creating the client in Eclipse Vert.x 3.x releases. // Using 3.x // omitting the error handling for brevity client.set("key", "value", s -> { if (s.succeeded()) { System.out.println("key stored"); client.get("key", g -> { if (s.succeeded()) { System.out.println("Retrieved value: " + s.result()); } }); } }); The following example shows how to handle requests after creating the client in Eclipse Vert.x 4. The example uses a list for setting the key-value pairs instead of hard coding options. See Redis SET command for more information on arguments available for the command. // Using 4.x // omitting the error handling for brevity // 1. Wrap the client into a RedisAPI api = RedisAPI.api(client); // 2. Use the typed API api.set( Arrays.asList("key", "value"), s -> { if (s.succeeded()) { System.out.println("key stored"); client.get("key", g -> { if (s.succeeded()) { System.out.println("Retrieved value: " + s.result()); } }); } }); 8.7.1.3. Migrating applications directly to Redis client When you migrate to the new Redis client directly: You can use all the new Redis commands. You can use extensions. You may reduce a few conversions from helper class to new client, which might improve the performance of your application. You must create the client and then update the applications to handle requests. The following example shows how to set and get requests after creating the client in Eclipse Vert.x 3.x releases. 
// Using 3.x // omitting the error handling for brevity client.set("key", "value", s -> { if (s.succeeded()) { System.out.println("key stored"); client.get("key", g -> { if (g.succeeded()) { System.out.println("Retrieved value: " + g.result()); } }); } }); The following example shows how to handle requests after creating the client in Eclipse Vert.x 4. // Using 4.x // omitting the error handling for brevity import static io.vertx.redis.client.Request.cmd; import static io.vertx.redis.client.Command.*; client.send(cmd(SET).arg("key").arg("value"), s -> { if (s.succeeded()) { System.out.println("key stored"); client.send(cmd(GET).arg("key"), g -> { if (g.succeeded()) { System.out.println("Retrieved value: " + g.result()); } }); } }); In Eclipse Vert.x 4, all the interactions use the send(Request) method. 8.7.1.4. Migrating responses In Eclipse Vert.x 3.x, the client used to hardcode all known commands up to Redis 5, and the responses were also typed according to the command. In the new client, the commands are not hardcoded. The responses are of the type Response . The new wire protocol supports a wider range of types. In the older client, a response would be one of the following types: null Long String JsonArray JsonObject (For INFO and HMGET array responses) In the new client, the response is one of the following types: null Response The Response object has type converters, for example: toString() toInteger() toBoolean() toBuffer() If the received data is not of the requested type, the type converters convert it to the closest possible data type. When the conversion to a particular type is not possible, an UnsupportedOperationException is thrown. For example, conversion from String to List or Map is not possible. You can also handle collections, because the Response object implements the Iterable interface. The following example shows how to perform an MGET request. // Using 4.x // omitting the error handling for brevity import static io.vertx.redis.client.Request.cmd; import static io.vertx.redis.client.Command.*; client.send(cmd(MGET).arg("key1").arg("key2").arg("key3"), mget -> { mget.result() .forEach(value -> { // Use the single value }); }); 8.7.2. Updates in Eclipse Vert.x Redis client This section describes changes in the Redis client. 8.7.2.1. Removed deprecated term "slave" from Redis roles and node options The deprecated term "slave" has been replaced with "replica" in Redis roles and node options. Roles The following example shows the usage of the SLAVE role in Eclipse Vert.x 3.x releases. The following example shows the usage of the REPLICA role in Eclipse Vert.x 4. Node options The following example shows the usage of the node type RedisSlaves in Eclipse Vert.x 3.x releases. The following example shows the usage of the node type RedisReplicas in Eclipse Vert.x 4.
"// 3.x SQLClient client = JDBCClient.create(vertx, jsonConfig);",
"// 4.x JDBCPool pool = JDBCPool.pool(vertx, jsonConfig);",
"// 3.x client.getConnection(res -> { if (res.succeeded()) { SQLConnection connection = res.result(); // Important, do not forget to return the connection connection.close(); } else { // Failed to get connection } });",
"// 4.x pool .getConnection() .onFailure(e -> { // Failed to get a connection }) .onSuccess(conn -> { // Important, do not forget to return the connection conn.close(); });",
"// 4.x ONLY!!! JDBCPool pool = JDBCPool.pool( vertx, // configure the connection new JDBCConnectOptions() // H2 connection string .setJdbcUrl(\"jdbc:h2:~/test\") // username .setUser(\"sa\") // password .setPassword(\"\"), // configure the pool new PoolOptions() .setMaxSize(16) );",
"// 3.x client.query(\"SELECT * FROM user WHERE emp_id > ?\", new JsonArray().add(1000), res -> { if (res.succeeded()) { ResultSet rs = res2.result(); // You can use these results in your application } });",
"// 4.x pool .preparedQuery(\"SELECT * FROM user WHERE emp_id > ?\") // the emp id to look up .execute(Tuple.of(1000)) .onSuccess(rows -> { for (Row row : rows) { System.out.println(row.getString(\"FIRST_NAME\")); } });",
"pool .getConnection() .onFailure(e -> { // Failed to get a connection }) .onSuccess(conn -> { conn .query(\"SELECT * FROM user\") .execute() .onFailure(e -> { // Handle the failure // Important, do not forget to return the connection conn.close(); }) .onSuccess(rows -> { for (Row row : rows) { System.out.println(row.getString(\"FIRST_NAME\")); } // Important, do not forget to return the connection conn.close(); }); });",
"// 3.x connection.callWithParams( \"{ call new_customer(?, ?) }\", new JsonArray().add(\"John\").add(\"Doe\"), null, res -> { if (res.succeeded()) { // Success! } else { // Failed! } });",
"// 4.x client .preparedQuery(\"{call new_customer(?, ?)}\") .execute(Tuple.of(\"Paulo\", \"Lopes\")) .onSuccess(rows -> { });",
"// 3.x connection.callWithParams( \"{ call customer_lastname(?, ?) }\", new JsonArray().add(\"John\"), new JsonArray().addNull().add(\"VARCHAR\"), res -> { if (res.succeeded()) { ResultSet result = res.result(); } else { // Failed! } });",
"// 4.x client .preparedQuery(\"{call customer_lastname(?, ?)}\") .execute(Tuple.of(\"John\", SqlOutParam.OUT(JDBCType.VARCHAR))) .onSuccess(rows -> { });",
"{ \"devDependencies\": { \"@vertx/eventbus-bridge-client.js\": \"1.0.0-1\" } }",
"<dependency> <groupId>org.webjars.npm</groupId> <artifactId>vertx__eventbus-bridge-client.js</artifactId> <version>1.0.0-1</version> </dependency>",
"// Create the redis client (3.x) RedisClient client = RedisClient .create(vertx, new RedisOptions().setHost(host));",
"// Create the redis client (4.x) Redis client = Redis .createClient( vertx, \"redis://server.address:port\");",
"redis[s]://[[user]:password@]server[:port]/[database]",
"// Using 3.x // omitting the error handling for brevity client.set(\"key\", \"value\", s -> { if (s.succeeded()) { System.out.println(\"key stored\"); client.get(\"key\", g -> { if (s.succeeded()) { System.out.println(\"Retrieved value: \" + s.result()); } }); } });",
"// Using 4.x // omitting the error handling for brevity // 1. Wrap the client into a RedisAPI api = RedisAPI.api(client); // 2. Use the typed API api.set( Arrays.asList(\"key\", \"value\"), s -> { if (s.succeeded()) { System.out.println(\"key stored\"); client.get(\"key\", g -> { if (s.succeeded()) { System.out.println(\"Retrieved value: \" + s.result()); } }); } });",
"// Using 3.x // omitting the error handling for brevity client.set(\"key\", \"value\", s -> { if (s.succeeded()) { System.out.println(\"key stored\"); client.get(\"key\", g -> { if (s.succeeded()) { System.out.println(\"Retrieved value: \" + s.result()); } }); } });",
"// Using 4.x // omitting the error handling for brevity import static io.vertx.redis.client.Request.cmd; import static io.vertx.redis.client.Command.*; client.send(cmd(SET).arg(\"key\").arg(\"value\"), s -> { if (s.succeeded()) { System.out.println(\"key stored\"); client.send(cmd(GET).arg(\"key\"), g -> { if (s.succeeded()) { System.out.println(\"Retrieved value: \" + s.result()); } }); } });",
"// Using 4.x // omitting the error handling for brevity import static io.vertx.redis.client.Request.cmd; import static io.vertx.redis.client.Command.*; client.send(cmd(MGET).arg(\"key1\").arg(\"key2\").arg(\"key3\"), mget -> { mget.result() .forEach(value -> { // Use the single value",
"// Before (3.x) Redis.createClient( rule.vertx(), new RedisOptions() .setType(RedisClientType.SENTINEL) .addConnectionString(\"redis://localhost:5000\") .setMasterName(\"sentinel7000\") .setRole(RedisRole.SLAVE));",
"// After (4.x) Redis.createClient( rule.vertx(), new RedisOptions() .setType(RedisClientType.SENTINEL) .addConnectionString(\"redis://localhost:5000\") .setMasterName(\"sentinel7000\") .setRole(RedisRole.REPLICA));",
"// Before (3.9) options.setUseSlaves(RedisSlaves);",
"// After (4.x) options.setUseReplicas(RedisReplicas);"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/eclipse_vert.x_4.3_migration_guide/changes-in-client-components_vertx |
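To connect the pieces described in section 8.7, the following self-contained sketch creates a Vert.x 4 client from a connection string, requests a dedicated connection (the "control the connection yourself" option mentioned in 8.7.1.2), issues SET and MGET through send(Request), iterates the multi-value Response, and then returns the connection. It is an illustrative sketch only: the host, port, and key names are assumptions, and the future-based style shown here is an alternative to the callback style used in the guide's own examples.

import io.vertx.core.Vertx;
import io.vertx.redis.client.Command;
import io.vertx.redis.client.Redis;
import io.vertx.redis.client.Request;

public class RedisMigrationSketch {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Connection string follows redis[s]://[[user]:password@]server[:port]/[database];
    // host, port, and key names below are illustrative assumptions.
    Redis client = Redis.createClient(vertx, "redis://localhost:6379/0");

    // Request a dedicated connection instead of letting the client pool it.
    client.connect()
      .onFailure(Throwable::printStackTrace)
      .onSuccess(conn -> {
        // Every interaction goes through send(Request).
        conn.send(Request.cmd(Command.SET).arg("key").arg("value"))
          .compose(ok -> conn.send(Request.cmd(Command.MGET).arg("key").arg("other-key")))
          .onSuccess(mget -> {
            // Response implements Iterable, so a multi-value reply can be iterated.
            mget.forEach(value -> System.out.println("value: " + value));
          })
          // Important: return the connection when done.
          .onComplete(done -> conn.close());
      });
  }
}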
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.4 Documentation Data Grid 8.4 Component Details Supported Configurations for Data Grid 8.4 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/embedding_data_grid_in_java_applications/rhdg-docs_datagrid |
Chapter 38. NetworkManager connection profiles in keyfile format | Chapter 38. NetworkManager connection profiles in keyfile format By default, NetworkManager stores connection profiles in ifcfg format, but you can also use profiles in keyfile format. Unlike the deprecated ifcfg format, the keyfile format supports all connection settings that NetworkManager provides. In Red Hat Enterprise Linux 9, the keyfile format will be the default. 38.1. The keyfile format of NetworkManager profiles The keyfile format is similar to the INI format. For example, the following is an Ethernet connection profile in keyfile format: Warning Typos or incorrect placements of parameters can lead to unexpected behavior. Therefore, do not manually edit or create NetworkManager profiles. Use the nmcli utility, the network RHEL system role, or the nmstate API to manage NetworkManager connections. For example, you can use the nmcli utility in offline mode to create connection profiles. Each section corresponds to a NetworkManager setting name as described in the nm-settings(5) man page. Each key-value pair in a section is one of the properties listed in the settings specification of the man page. Most variables in NetworkManager keyfiles have a one-to-one mapping. This means that a NetworkManager property is stored in the keyfile as a variable of the same name and in the same format. However, there are exceptions, mainly to make the keyfile syntax easier to read. For a list of these exceptions, see the nm-settings-keyfile(5) man page on your system. Important For security reasons, because connection profiles can contain sensitive information, such as private keys and passphrases, NetworkManager uses only configuration files that are owned by the root user and that are readable and writable only by root . Save the connection profile with a .nmconnection suffix in the /etc/NetworkManager/system-connections/ directory. This directory contains persistent profiles. If you modify a persistent profile by using the NetworkManager API, NetworkManager writes and overwrites files in this directory. NetworkManager does not automatically reload profiles from disk. When you create or update a connection profile in keyfile format, use the nmcli connection reload command to inform NetworkManager about the changes. 38.2. Using nmcli to create keyfile connection profiles in offline mode Use NetworkManager utilities, such as nmcli , the network RHEL system role, or the nmstate API, to create and update connection configuration files. However, you can also create various connection profiles in the keyfile format in offline mode by using the nmcli --offline connection add command. The offline mode ensures that nmcli operates without the NetworkManager service to produce keyfile connection profiles through standard output. This feature can be useful in the following scenarios: You want to create connection profiles that need to be pre-deployed somewhere, for example in a container image or as an RPM package. You want to create connection profiles in an environment where the NetworkManager service is not available, for example, when you want to use the chroot utility, or when you want to create or modify the network configuration of the RHEL system to be installed through the Kickstart %post script. Procedure Create a new connection profile in the keyfile format.
For example, for a connection profile of an Ethernet device that does not use DHCP, run a similar nmcli command: Note The connection name you specified with the con-name key is saved into the id variable of the generated profile. When you use the nmcli command to manage this connection later, specify the connection as follows: When the id variable is not omitted, use the connection name, for example Example-Connection . When the id variable is omitted, use the file name without the .nmconnection suffix, for example output . Set permissions to the configuration file so that only the root user can read and update it: Start the NetworkManager service: If you set the autoconnect variable in the profile to false , activate the connection: Verification Verify that the NetworkManager service is running: Verify that NetworkManager can read the profile from the configuration file: If the output does not show the newly created connection, verify that the keyfile permissions and the syntax you used are correct. Display the connection profile: Additional resources nmcli(1) , nm-settings(5) , and nm-settings-keyfile(5) man pages on your system 38.3. Manually creating a NetworkManager profile in keyfile format You can manually create a NetworkManager connection profile in keyfile format. Warning Manually creating or updating the configuration files can result in an unexpected or non-functional network configuration. As an alternative, you can use nmcli in offline mode. See Using nmcli to create keyfile connection profiles in offline mode Procedure Create a connection profile. For example, for a connection profile for the enp1s0 Ethernet device that uses DHCP, create the /etc/NetworkManager/system-connections/example.nmconnection file with the following content: Note You can use any file name with a .nmconnection suffix. However, when you later use nmcli commands to manage the connection, you must use the connection name set in the id variable when you refer to this connection. When you omit the id variable, use the file name without the .nmconnection to refer to this connection. Set permissions on the configuration file so that only the root user can read and update it: Reload the connection profiles: Verify that NetworkManager read the profile from the configuration file: If the command does not show the newly added connection, verify that the file permissions and the syntax you used in the file are correct. If you set the autoconnect variable in the profile to false , activate the connection: Verification Display the connection profile: Additional resources nm-settings(5) and nm-settings-keyfile(5) man pages on your system 38.4. The differences in interface renaming with profiles in ifcfg and keyfile format You can define custom network interface names, such as provider or lan to make interface names more descriptive. In this case, the udev service renames the interfaces. The renaming process works differently depending on whether you use connection profiles in ifcfg or keyfile format. The interface renaming process when using a profile in ifcfg format The /usr/lib/udev/rules.d/60-net.rules udev rule calls the /lib/udev/rename_device helper utility. The helper utility searches for the HWADDR parameter in /etc/sysconfig/network-scripts/ifcfg-* files. If the value set in the variable matches the MAC address of an interface, the helper utility renames the interface to the name set in the DEVICE parameter of the file. 
The interface renaming process when using a profile in keyfile format Create a systemd link file or a udev rule to rename an interface. Use the custom interface name in the interface-name property of a NetworkManager connection profile. Additional resources How the udev device manager renames network interfaces Configuring user-defined network interface names by using udev rules Configuring user-defined network interface names by using systemd link files 38.5. Migrating NetworkManager profiles from ifcfg to keyfile format If you use connection profiles in ifcfg format, you can convert them to the keyfile format to have all profiles in the preferred format and in one location. Note If an ifcfg file contains the NM_CONTROLLED=no setting, NetworkManager does not control this profile and, consequently the migration process ignores it. Prerequisites You have connection profiles in ifcfg format in the /etc/sysconfig/network-scripts/ directory. If the connection profiles contain a DEVICE variable that is set to a custom device name, such as provider or lan , you created a systemd link file or a udev rule for each of the custom device names. Procedure Migrate the connection profiles: Verification Optionally, you can verify that you successfully migrated all your connection profiles: Additional resources nm-settings-keyfile(5) nm-settings-ifcfg-rh(5) How the udev device manager renames network interfaces | [
"[connection] id= example_connection uuid= 82c6272d-1ff7-4d56-9c7c-0eb27c300029 type= ethernet autoconnect= true [ipv4] method= auto [ipv6] method= auto [ethernet] mac-address= 00:53:00:8f:fa:66",
"nmcli --offline connection add type ethernet con-name Example-Connection ipv4.addresses 192.0.2.1/24 ipv4.dns 192.0.2.200 ipv4.method manual > /etc/NetworkManager/system-connections/example.nmconnection",
"chmod 600 /etc/NetworkManager/system-connections/example.nmconnection chown root:root /etc/NetworkManager/system-connections/example.nmconnection",
"systemctl start NetworkManager.service",
"nmcli connection up Example-Connection",
"systemctl status NetworkManager.service ● NetworkManager.service - Network Manager Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled) Active: active (running) since Wed 2022-08-03 13:08:32 CEST; 1min 40s ago",
"nmcli -f TYPE,FILENAME,NAME connection TYPE FILENAME NAME ethernet /etc/NetworkManager/system-connections/examaple.nmconnection Example-Connection ethernet /etc/sysconfig/network-scripts/ifcfg-enp1s0 enp1s0",
"nmcli connection show Example-Connection connection.id: Example-Connection connection.uuid: 232290ce-5225-422a-9228-cb83b22056b4 connection.stable-id: -- connection.type: 802-3-ethernet connection.interface-name: -- connection.autoconnect: yes",
"[connection] id=Example-Connection type=ethernet autoconnect=true interface-name=enp1s0 [ipv4] method=auto [ipv6] method=auto",
"chown root:root /etc/NetworkManager/system-connections/example.nmconnection chmod 600 /etc/NetworkManager/system-connections/example.nmconnection",
"nmcli connection reload",
"nmcli -f NAME,UUID,FILENAME connection NAME UUID FILENAME Example-Connection 86da2486-068d-4d05-9ac7-957ec118afba /etc/NetworkManager/system-connections/example.nmconnection",
"nmcli connection up example_connection",
"nmcli connection show example_connection",
"nmcli connection migrate Connection 'enp1s0' (43ed18ab-f0c4-4934-af3d-2b3333948e45) successfully migrated. Connection 'enp2s0' (883333e8-1b87-4947-8ceb-1f8812a80a9b) successfully migrated.",
"nmcli -f TYPE,FILENAME,NAME connection TYPE FILENAME NAME ethernet /etc/NetworkManager/system-connections/enp1s0.nmconnection enp1s0 ethernet /etc/NetworkManager/system-connections/enp2s0.nmconnection enp2s0"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/assembly_networkmanager-connection-profiles-in-keyfile-format_configuring-and-managing-networking |
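As a supplement to the keyfile procedures above, the following hand-written profile is roughly what the nmcli --offline example with static IPv4 addressing maps to in keyfile syntax; it is shown only to illustrate how the nmcli options correspond to keyfile sections. The interface name and addresses are assumptions, and a file generated by nmcli additionally contains a uuid= line, so the exact output can differ.

# Illustrative static-IPv4 profile; values are assumptions
[connection]
id=Example-Connection
type=ethernet
interface-name=enp1s0

[ipv4]
method=manual
address1=192.0.2.1/24
dns=192.0.2.200;

[ipv6]
method=auto

After saving such a file as /etc/NetworkManager/system-connections/example.nmconnection with root-only permissions, reload and activate it with the commands already shown above:

chmod 600 /etc/NetworkManager/system-connections/example.nmconnection
nmcli connection reload
nmcli connection up Example-Connection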
function::cputime_to_usecs | function::cputime_to_usecs Name function::cputime_to_usecs - Translates the given cputime into microseconds Synopsis Arguments cputime Time to convert to microseconds. | [
"cputime_to_usecs:long(cputime:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-cputime-to-usecs |
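A minimal usage sketch for cputime_to_usecs, assuming the task_time tapset's task_utime() and task_stime() helpers are available to supply cputime values for the current task; the 5-second interval is arbitrary:

probe timer.s(5) {
  # Report the CPU time consumed so far by whatever task is current when the timer fires.
  printf("%s(%d): %d us user, %d us system\n",
         execname(), pid(),
         cputime_to_usecs(task_utime()),
         cputime_to_usecs(task_stime()))
}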
Chapter 4. Configuration options for JDK Flight Recorder | Chapter 4. Configuration options for JDK Flight Recorder You can configure JDK Flight Recorder (JFR) to capture various sets of events using the command line or diagnostic commands. 4.1. Configure JDK Flight Recorder using the command line You can configure JDK Flight Recorder (JFR) from the command line using the following options: 4.1.1. Start JFR Use -XX:StartFlightRecording option to start a JFR recording for the Java application. For example: You can set the following parameter=value entries when starting a JFR recording: delay=time Use this parameter to specify the delay between the Java application launch time and the start of the recording. Append s to specify the time in seconds, m for minutes, h for hours, or d for days. For example, specifying 10m means 10 minutes. By default, there is no delay, and this parameter is set to 0. disk={true|false} Use this parameter to specify whether to write data to disk while recording. By default, this parameter is true . dumponexit={true|false} Use this parameter to specify if the running recording is dumped when the JVM shuts down. If the parameter is enabled and a file name is not set, the recording is written to a file in the directory where the recording progress has started. The file name is a system-generated name that contains the process ID, recording ID, and current timestamp. For example, hotspot-pid-47496-id-1-2018_01_25_19_10_41.jfr. By default, this parameter is false . duration=time Use this parameter to specify the duration of the recording. Append s to specify the time in seconds, m for minutes, h for hours, or d for days. For example, if you specify duration as 5h, it indicates 5 hours. By default, this parameter is set to 0, which means there is no limit set on the recording duration. filename=path Use this parameter to specify the path and name of the recording file. The recording is written to this file when stopped. For example: · recording.jfr · /home/user/recordings/recording.jfr name=identifier Use this parameter to specify both the name and the identifier of a recording. maxage=time Use this parameter to specify the maximum number of days the recording should be available on the disk. This parameter is valid only when the disk parameter is set to true. Append s to specify the time in seconds, m for minutes, h for hours, or d for days. For example, when you specify 30s, it indicates 30 seconds. By default, this parameter is set to 0, which means there is no limit set. maxsize=size Use this parameter to specify the maximum size of disk data to keep for the recording. This parameter is valid only when the disk parameter is set to true. The value must not be less than the value for the maxchunksize parameter set with -XX:FlightRecorderOptions . Append m or M to specify the size in megabytes, or g or G to specify the size in gigabytes. By default, the maximum size of disk data isn't limited, and this parameter is set to 0. path-to-gc-roots={true|false} Use this parameter to specify whether to collect the path to garbage collection (GC) roots at the end of a recording. By default, this parameter is set to false. The path to GC roots is useful for finding memory leaks. For Red Hat build of OpenJDK 17, you can enable the OldObjectSample event which is a more efficient alternative than using heap dumps. You can also use the OldObjectSample event in production. Collecting memory leak information is time-consuming and incurs extra overhead. 
You should enable this parameter only when you start recording an application that you suspect has memory leaks. If the JFR profile parameter is set to profile, you can trace the stack from where the object is leaking; this information is included in the data collected. settings=path Use this parameter to specify the path and name of the event settings file (of type JFC). By default, the default.jfc file is used, which is located in JAVA_HOME/lib/jfr. This default settings file collects a predefined set of information with low overhead, so it has minimal impact on performance and can be used with recordings that run continuously. A second settings file, profile.jfc, is also provided; it collects more data than the default configuration, but can have more overhead and impact performance. Use this configuration for short periods of time when more information is needed. Note You can specify values for multiple parameters by separating them with a comma. For example, -XX:StartFlightRecording=disk=false,name=example-recording . 4.1.2. Control behavior of JFR Use the -XX:FlightRecorderOptions option to set the parameters that control the behavior of JFR. For example: You can set the following parameter=value entries to control the behavior of JFR: globalbuffersize=size Use this parameter to specify the total amount of primary memory used for data retention. The default value is based on the value specified for memorysize . You can change the memorysize parameter to alter the size of global buffers. maxchunksize=size Use this parameter to specify the maximum size of the data chunks in a recording. Append m or M to specify the size in megabytes (MB), or g or G to specify the size in gigabytes (GB). By default, the maximum size of data chunks is set to 12 MB. The minimum size allowed is 1 MB. memorysize=size Use this parameter to determine how much buffer memory should be used. The parameter sets the globalbuffersize and numglobalbuffers parameters based on the size specified. Append m or M to specify the size in megabytes (MB), or g or G to specify the size in gigabytes (GB). By default, the memory size is set to 10 MB. numglobalbuffers=number Use this parameter to specify the number of global buffers used. The default value is based on the size specified in the memorysize parameter. You can change the memorysize parameter to alter the number of global buffers. old-object-queue-size=number-of-objects Use this parameter to track the maximum number of old objects. By default, the number of objects is set to 256. repository=path Use this parameter to specify the repository for temporary disk storage. By default, it uses the system temporary directory. retransform={true|false} Use this parameter to specify if event classes should be retransformed using JVMTI. If set to false , instrumentation is added to loaded event classes. By default, this parameter is set to true for enabling class retransformation. samplethreads={true|false} Use this parameter to specify whether thread sampling is enabled. Thread sampling only occurs when the sampling event is enabled and this parameter is set to true . By default, this parameter is set to true . stackdepth=depth Use this parameter to set the stack depth for stack traces. By default, the stack depth is set to 64 method calls. You can set the maximum stack depth to 2048. Values greater than 64 could create significant overhead and reduce performance. threadbuffersize=size Use this parameter to specify the local buffer size for a thread.
By default, the local buffer size is set to 8 kilobytes, with a minimum value of 4 kilobytes. Overriding this parameter could reduce performance and is not recommended. Note You can specify values for multiple parameters by separating them with a comma. 4.2. Configuring JDK Flight Recorder using diagnostic command (JCMD) You can configure JDK Flight Recorder (JFR) using Java diagnostic command. The simplest way to execute a diagnostic command is to use the jcmd tool which is located in the Java installation directory. To use a command, you have to pass the process identifier of the JVM or the name of the main class, and the actual command as arguments to jcmd . You can retrieve the JVM or the name of the main class by running jcmd without arguments or by using jps . The jps (Java Process Status) tool lists JVMs on a target system to which it has access permissions. To see a list of all running Java processes, use the jcmd command without any arguments. To see a complete list of commands available for a running Java application, specify help as the diagnostic command after the process identifier or the name of the main class. Use the following diagnostic commands for JFR: 4.2.1. Start JFR Use JFR.start diagnostic command to start a flight recording. For example: Table 4.1. The following table lists the parameters you can use with this command: Parameter Description Data type Default value name Name of the recording String - settings Server-side template String - duration Duration of recording Time 0s filename Resulting recording file name String - maxage Maximum age of buffer data Time 0s maxsize Maximum size of buffers in bytes Long 0 dumponexit Dump running recording when JVM shuts down Boolean - path-to-gc-roots Collect path to garbage collector roots Boolean False 4.2.2. Stop JFR Use JFR.stop diagnostic command to stop running flight recordings. For example: Table 4.2. The following table lists the parameters you can use with this command. Parameter Description Data type Default value name Name of the recording String - filename Copy recording data to the file String - 4.2.3. Check JFR Use JFR.check command to show information about the recordings which are in progress. For example: Table 4.3. The following table lists the parameters you can use with this command. Parameter Description Data type Default value name Name of the recording String - filename Copy recording data to the file String - maxage Maximum duration to dump file Time 0s maxsize Maximum amount of bytes to dump Long 0 begin Starting time to dump data String - end Ending time to dump data String - path-to-gc-roots Collect path to garbage collector roots Boolean false 4.2.4. Dump JFR Use JFR.dump diagnostic command to copy the content of a flight recording to a file. For example: Table 4.4. The following table lists the parameters you can use with this command. Parameter Description Data type Default value name Name of the recording String - filename Copy recording data to the file String - maxage Maximum duration to dump file Time 0s maxsize Maximum amount of bytes to dump Long 0 begin Starting time to dump data String - end Ending time to dump data String - path-to-gc-roots Collect path to garbage collector roots Boolean false 4.2.5. Configure JFR Use JFR.configure diagnostic command to configure the flight recordings. For example: Table 4.5. The following table lists the parameters you can use with this command. 
Parameter Description Data type Default value repositorypath Path to repository String - dumppath Path to dump String - stackdepth Stack depth Jlong 64 globalbuffercount Number of global buffers Jlong 32 globalbuffersize Size of a global buffer Jlong 524288 thread_buffer_size Size of a thread buffer Jlong 8192 memorysize Overall memory size Jlong 16777216 maxchunksize Size of an individual disk chunk Jlong 12582912 Samplethreads Activate thread sampling Boolean true Revised on 2024-05-03 15:34:54 UTC | [
"java -XX:StartFlightRecording=delay=5s,disk=false,dumponexit=true,duration=60s,filename=myrecording.jfr <<YOUR_JAVA_APPLICATION>>",
"java -XX:FlightRecorderOptions=duration=60s,filename=myrecording.jfr -XX:FlightRecorderOptions=stackdepth=128,maxchunksize=2M <<YOUR_JAVA_APPLICATION>>",
"jcmd <PID> JFR.start delay=10s duration=10m filename=recording.jfr",
"jcmd <PID> JFR.stop name=output_file",
"jcmd <PID> JFR.check",
"jcmd <PID> JFR.dump name=output_file filename=output.jfr",
"jcmd <PID> JFR.configure repositorypath=/home/jfr/recordings"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_jdk_flight_recorder_with_red_hat_build_of_openjdk/configure-jfr-options |
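The options and diagnostic commands above describe how to produce a .jfr file. Once a recording exists, it can also be inspected programmatically with the jdk.jfr.consumer API that ships with OpenJDK. The short sketch below is a supplementary example rather than part of the configuration options themselves; the file name is an assumption taken from the earlier examples.

import java.nio.file.Path;
import java.util.List;
import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordingFile;

public class ReadRecording {
    public static void main(String[] args) throws Exception {
        // A recording produced by -XX:StartFlightRecording or "jcmd <PID> JFR.dump".
        Path file = Path.of("recording.jfr");
        // Load every event from the file into memory (fine for small recordings).
        List<RecordedEvent> events = RecordingFile.readAllEvents(file);
        for (RecordedEvent event : events) {
            // Print each event type together with its duration.
            System.out.println(event.getEventType().getName() + " " + event.getDuration());
        }
    }
}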
Chapter 12. Security | Chapter 12. Security GSSAPI key-exchange algorithms can now be selectively disabled In view of the Logjam security vulnerability, the gss-group1-sha1-* key-exchange methods are no longer considered secure. While it was possible to disable this key-exchange method as a normal key exchange, it was not possible to disable it as a GSSAPI key exchange. With this update, the administrator can selectively disable this or other algorithms used by the GSSAPI key exchange. SELinux policy for Red Hat Gluster Storage has been added Previously, SELinux policy for Red Hat Gluster Storage (RHGS) components was missing, and Gluster worked correctly only when SELinux was in permissive mode. With this update, SELinux policy rules for the glusterd (glusterFS Management Service), glusterfsd (NFS server), smbd , nfsd , rpcd , and ctdbd processes have been updated, providing SELinux support for Gluster. openscap rebase to version 1.2.5 The openscap packages have been upgraded to upstream version 1.2.5, which provides a number of bug fixes and enhancements over the previous version. Notable enhancements include: * Support for OVAL version 5.11, which brings multiple improvements, such as support for systemd properties * Introduced native support of xml.bz2 input files * Introduced the oscap-ssh tool for assessing remote systems * Introduced the oscap-docker tool for assessing containers/images scap-security-guide rebase to version 0.1.25 The scap-security-guide tool has been upgraded to upstream version 0.1.25, which provides a number of bug fixes and enhancements over the previous version. Notable enhancements include: * New security profiles for Red Hat Enterprise Linux 7 Server: Common Profile for General-Purpose Systems, Draft PCI-DSS v3 Control Baseline, Standard System Security Profile, and Draft STIG for Red Hat Enterprise Linux 7 Server. * New security benchmarks for Firefox and Java Runtime Environment (JRE) components running on Red Hat Enterprise Linux 6 and 7. * New scap-security-guide-doc subpackage, which contains HTML-formatted documents containing security guides generated from XCCDF benchmarks (for every security profile shipped in security benchmarks for Red Hat Enterprise Linux 6 and 7, Firefox, and JRE). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/security |
Chapter 4. EgressFirewall [k8s.ovn.org/v1] | Chapter 4. EgressFirewall [k8s.ovn.org/v1] Description EgressFirewall describes the current egress firewall for a Namespace. Traffic from a pod to an IP address outside the cluster will be checked against each EgressFirewallRule in the pod's namespace's EgressFirewall, in order. If no rule matches (or no EgressFirewall is present) then the traffic will be allowed by default. Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of EgressFirewall. status object Observed status of EgressFirewall 4.1.1. .spec Description Specification of the desired behavior of EgressFirewall. Type object Required egress Property Type Description egress array a collection of egress firewall rule objects egress[] object EgressFirewallRule is a single egressfirewall rule object 4.1.2. .spec.egress Description a collection of egress firewall rule objects Type array 4.1.3. .spec.egress[] Description EgressFirewallRule is a single egressfirewall rule object Type object Required to type Property Type Description ports array ports specify what ports and protocols the rule applies to ports[] object EgressFirewallPort specifies the port to allow or deny traffic to to object to is the target that traffic is allowed/denied to type string type marks this as an "Allow" or "Deny" rule 4.1.4. .spec.egress[].ports Description ports specify what ports and protocols the rule applies to Type array 4.1.5. .spec.egress[].ports[] Description EgressFirewallPort specifies the port to allow or deny traffic to Type object Required port protocol Property Type Description port integer port that the traffic must match protocol string protocol (tcp, udp, sctp) that the traffic must match. 4.1.6. .spec.egress[].to Description to is the target that traffic is allowed/denied to Type object Property Type Description cidrSelector string cidrSelector is the CIDR range to allow/deny traffic to. If this is set, dnsName and nodeSelector must be unset. dnsName string dnsName is the domain name to allow/deny traffic to. If this is set, cidrSelector and nodeSelector must be unset. nodeSelector object nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, cidrSelector and DNSName must be unset. 4.1.7. .spec.egress[].to.nodeSelector Description nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, cidrSelector and DNSName must be unset. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 4.1.8. .spec.egress[].to.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 4.1.9. .spec.egress[].to.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 4.1.10. .status Description Observed status of EgressFirewall Type object Property Type Description status string 4.2. API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/egressfirewalls GET : list objects of kind EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls DELETE : delete collection of EgressFirewall GET : list objects of kind EgressFirewall POST : create an EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name} DELETE : delete an EgressFirewall GET : read the specified EgressFirewall PATCH : partially update the specified EgressFirewall PUT : replace the specified EgressFirewall 4.2.1. /apis/k8s.ovn.org/v1/egressfirewalls Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". 
This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. 
- resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind EgressFirewall Table 4.2. HTTP responses HTTP code Reponse body 200 - OK EgressFirewallList schema 401 - Unauthorized Empty 4.2.2. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls Table 4.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of EgressFirewall Table 4.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressFirewall Table 4.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.8. HTTP responses HTTP code Reponse body 200 - OK EgressFirewallList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressFirewall Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.10. Body parameters Parameter Type Description body EgressFirewall schema Table 4.11. 
HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 202 - Accepted EgressFirewall schema 401 - Unauthorized Empty 4.2.3. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name} Table 4.12. Global path parameters Parameter Type Description name string name of the EgressFirewall namespace string object name and auth scope, such as for teams and projects Table 4.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an EgressFirewall Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.15. Body parameters Parameter Type Description body DeleteOptions schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressFirewall Table 4.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.18. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressFirewall Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.20. Body parameters Parameter Type Description body Patch schema Table 4.21. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressFirewall Table 4.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.23. Body parameters Parameter Type Description body EgressFirewall schema Table 4.24. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_apis/egressfirewall-k8s-ovn-org-v1 |
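The operations above can be exercised with ordinary client tooling. The following sketch is illustrative only: it assumes an OVN-Kubernetes cluster, a namespace named project1, and the per-namespace EgressFirewall object named default; the rule in the patch body is an example and is not taken from the reference above.

$ oc get egressfirewalls.k8s.ovn.org -n project1 --chunk-size=50      # paginated list; the client handles limit/continue for you
$ oc get egressfirewall default -n project1 -o yaml                   # read the specified EgressFirewall
$ oc patch egressfirewall default -n project1 --type=merge --dry-run=server \
    -p '{"spec":{"egress":[{"type":"Deny","to":{"cidrSelector":"0.0.0.0/0"}}]}}'  # server-side dry run of a partial update

The --dry-run=server flag corresponds to the dryRun query parameter described above, so the server validates the patch without persisting it.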
7.177. perl-IPC-Run3 | 7.177.1. RHBA-2012:1440 - perl-IPC-Run3 bug fix update Updated perl-IPC-Run3 packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The perl-IPC-Run3 packages provide a module to run subprocesses and redirect the stdin, stdout, and stderr functionalities to files and Perl data structures. The perl-IPC-Run3 package allows you to use the functionality of system, qx, and open3 with a simple API. Bug Fixes BZ# 657487 Prior to this update, binary perl-IPC-Run3 packages failed to build if the perl-Time-HiRes module was not installed. This update adds the perl-Time-HiRes package to the build-time dependencies for perl-IPC-Run3. BZ# 870089 Prior to this update, tests that called the IPC-Run3 profiler failed when the internal perl-IPC-Run3 test suite was used. This update adds run-time dependencies on perl(Getopt::Long) and perl(Time::HiRes) to the perl-IPC-Run3 package because certain IPC-Run3 functions require these Perl modules. Now, the IPC-Run3 profiler runs as expected. All users of perl-IPC-Run3 are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/perl-ipc-run3 |
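For readers unfamiliar with the module, a minimal IPC::Run3 call looks like the following sketch; the command and input data are illustrative and are not taken from the advisory above.

use strict;
use warnings;
use IPC::Run3;

my @cmd = ('sort', '-u');          # external command to run
my $in  = "b\na\nb\n";             # fed to the child's stdin
my ($out, $err);                   # receive the child's stdout and stderr
run3 \@cmd, \$in, \$out, \$err;
die "sort failed: $err" if $? >> 8;
print $out;                        # prints "a\nb\n"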
Chapter 4. Monitoring OpenShift sandboxed containers | Chapter 4. Monitoring OpenShift sandboxed containers You can use the OpenShift Container Platform web console to monitor metrics related to the health status of your sandboxed workloads and nodes. OpenShift sandboxed containers has a pre-configured dashboard available in the web console, and administrators can also access and query raw metrics through Prometheus. 4.1. About OpenShift sandboxed containers metrics OpenShift sandboxed containers metrics enable administrators to monitor how their sandboxed containers are running. You can query for these metrics in Metrics UI in the web console. OpenShift sandboxed containers metrics are collected for the following categories: Kata agent metrics Kata agent metrics display information about the kata agent process running in the VM embedded in your sandboxed containers. These metrics include data from /proc/<pid>/[io, stat, status] . Kata guest OS metrics Kata guest OS metrics display data from the guest OS running in your sandboxed containers. These metrics include data from /proc/[stats, diskstats, meminfo, vmstats] and /proc/net/dev . Hypervisor metrics Hypervisor metrics display data regarding the hypervisor running the VM embedded in your sandboxed containers. These metrics mainly include data from /proc/<pid>/[io, stat, status] . Kata monitor metrics Kata monitor is the process that gathers metric data and makes it available to Prometheus. The kata monitor metrics display detailed information about the resource usage of the kata-monitor process itself. These metrics also include counters from Prometheus data collection. Kata containerd shim v2 metrics Kata containerd shim v2 metrics display detailed information about the kata shim process. These metrics include data from /proc/<pid>/[io, stat, status] and detailed resource usage metrics. 4.2. Viewing metrics for OpenShift sandboxed containers You can access the metrics for OpenShift sandboxed containers in the Metrics page in the web console. Prerequisites You have OpenShift Container Platform 4.10 installed. You have OpenShift sandboxed containers installed. You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. Procedure From the Administrator perspective in the web console, navigate to Observe Metrics . In the input field, enter the query for the metric you want to observe. All kata-related metrics begin with kata . Typing kata will display a list with all of the available kata metrics. The metrics from your query are visualized on the page. Additional resources For more information about creating PromQL queries to view metrics, see Querying Metrics . 4.3. Viewing the OpenShift sandboxed containers dashboard You can access the OpenShift sandboxed containers dashboard in the Dashboards page in the web console. Prerequisites You have OpenShift Container Platform 4.10 installed. You have OpenShift sandboxed containers installed. You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. Procedure From the Administrator perspective in the web console, navigate to Observe Dashboards . From the Dashboard drop-down list, select the Sandboxed Containers dashboard. Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Define the date and time range for the data you want to view. 
Click Save to save the custom time range. Optional: Select a Refresh Interval . The dashboard appears on the page with the following metrics from the Kata guest OS category: Number of running VMs Displays the total number of sandboxed containers running on your cluster. CPU Usage (per VM) Displays the CPU usage for each individual sandboxed container. Memory Usage (per VM) Displays the memory usage for each individual sandboxed container. Hover over each of the graphs within a dashboard to display detailed information about specific items. 4.4. Additional resources For more information about gathering data for support, see Gathering data about your cluster . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/sandboxed_containers_support_for_openshift/monitoring-sandboxed-containers |
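The same data can be pulled from the cluster monitoring stack outside the console. The following sketch assumes the default openshift-monitoring configuration and uses kata_monitor_running_shim_count as an example metric name; substitute any metric that appears when you type kata in the Metrics UI.

$ TOKEN=$(oc whoami -t)
$ HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')
$ curl -sk -H "Authorization: Bearer $TOKEN" \
    "https://$HOST/api/v1/query?query=kata_monitor_running_shim_count"   # instant query against the Prometheus HTTP API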
Chapter 26. Support for FIPS cryptography | Chapter 26. Support for FIPS cryptography You can install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . For the Red Hat Enterprise Linux CoreOS (RHCOS) machines in your cluster, this change is applied when the machines are deployed based on the status of an option in the install-config.yaml file, which governs the cluster options that a user can change during cluster deployment. With Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. These configuration methods ensure that your cluster meets the requirements of a FIPS compliance audit: only FIPS validated or Modules In Process cryptography packages are enabled before the initial system boot. Because FIPS must be enabled before the operating system that your cluster uses boots for the first time, you cannot enable FIPS after you deploy a cluster. 26.1. FIPS validation in OpenShift Container Platform OpenShift Container Platform uses certain FIPS validated or Modules In Process modules within RHEL and RHCOS for the operating system components that it uses. See RHEL8 core crypto components . For example, when users use SSH to connect to OpenShift Container Platform clusters and containers, those connections are properly encrypted. OpenShift Container Platform components are written in Go and built with Red Hat's golang compiler. When you enable FIPS mode for your cluster, all OpenShift Container Platform components that require cryptographic signing call RHEL and RHCOS cryptographic libraries. Table 26.1. FIPS mode attributes and limitations in OpenShift Container Platform 4.10 Attributes Limitations FIPS support in RHEL 8 and RHCOS operating systems. The FIPS implementation does not offer a single function that both computes hash functions and validates the keys that are based on that hash. This limitation will continue to be evaluated and improved in future OpenShift Container Platform releases. FIPS support in CRI-O runtimes. FIPS support in OpenShift Container Platform services. FIPS validated or Modules In Process cryptographic module and algorithms that are obtained from RHEL 8 and RHCOS binaries and images. Use of FIPS compatible golang compiler. TLS FIPS support is not complete but is planned for future OpenShift Container Platform releases. FIPS support across multiple architectures. FIPS is currently only supported on OpenShift Container Platform deployments using the x86_64 architecture. 26.2. FIPS support in components that the cluster uses Although the OpenShift Container Platform cluster itself uses FIPS validated or Modules In Process modules, ensure that the systems that support your OpenShift Container Platform cluster use FIPS validated or Modules In Process modules for cryptography. 26.2.1. etcd To ensure that the secrets that are stored in etcd use FIPS validated or Modules In Process encryption, boot the node in FIPS mode. After you install the cluster in FIPS mode, you can encrypt the etcd data by using the FIPS-approved aes cbc cryptographic algorithm. 26.2.2. 
Storage For local storage, use RHEL-provided disk encryption or Container Native Storage that uses RHEL-provided disk encryption. By storing all data in volumes that use RHEL-provided disk encryption and enabling FIPS mode for your cluster, both data at rest and data in motion, or network data, are protected by FIPS validated or Modules In Process encryption. You can configure your cluster to encrypt the root filesystem of each node, as described in Customizing nodes . 26.2.3. Runtimes To ensure that containers know that they are running on a host that is using FIPS validated or Modules In Process cryptography modules, use CRI-O to manage your runtimes. CRI-O supports FIPS mode, in that it configures the containers to know that they are running in FIPS mode. 26.3. Installing a cluster in FIPS mode To install a cluster in FIPS mode, follow the instructions to install a customized cluster on your preferred infrastructure. Ensure that you set fips: true in the install-config.yaml file before you deploy your cluster. Amazon Web Services Alibaba Cloud Microsoft Azure Bare metal Google Cloud Platform Red Hat OpenStack Platform (RHOSP) VMware vSphere Note If you are using Azure File storage, you cannot enable FIPS mode. To apply AES CBC encryption to your etcd data store, follow the Encrypting etcd data process after you install your cluster. If you add RHEL nodes to your cluster, ensure that you enable FIPS mode on the machines before their initial boot. See Adding RHEL compute machines to an OpenShift Container Platform cluster and Enabling FIPS Mode in the RHEL 8 documentation. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/installing/installing-fips |
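For illustration, the relevant fragment of an install-config.yaml with FIPS mode enabled looks like the following sketch; the domain, cluster name, and platform stanza are placeholders, and the remaining fields follow the normal installation procedure for your chosen platform.

apiVersion: v1
baseDomain: example.com
metadata:
  name: fips-cluster            # illustrative cluster name
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
platform:
  aws:                          # any supported platform stanza can appear here
    region: us-east-1
fips: true                      # must be set before deployment; FIPS cannot be enabled after the cluster is installed
pullSecret: '<pull_secret>'
sshKey: '<ssh_public_key>'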
1.5. Load Balancing | 1.5. Load Balancing 1.5.1. Configure Load Balancing The Teiid JDBC driver does not perform true load balancing. It can route queries across the host:port combinations defined in the connection URL, but it distributes them without taking the actual load on each server into account. For genuine load balancing, place an external load balancer such as HAProxy in front of the servers. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-load_balancing |
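A minimal sketch of such an HAProxy front end follows; the host names, the backend port 31000 (the default Teiid JDBC port), and the balancing algorithm are illustrative and should be adapted to the actual deployment.

frontend teiid_jdbc
    mode tcp
    bind *:31000
    default_backend teiid_servers

backend teiid_servers
    mode tcp
    balance leastconn                      # send new connections to the member with the fewest active connections
    server dv1 dv1.example.com:31000 check
    server dv2 dv2.example.com:31000 check

Client applications then point a single JDBC URL at the proxy, for example jdbc:teiid:myvdb@mm://haproxy.example.com:31000, instead of listing every host:port pair in the URL.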
Chapter 18. Hardware networks | Chapter 18. Hardware networks 18.1. About Single Root I/O Virtualization (SR-IOV) hardware networks The Single Root I/O Virtualization (SR-IOV) specification is a standard for a type of PCI device assignment that can share a single device with multiple pods. You can configure a Single Root I/O Virtualization (SR-IOV) device in your cluster by using the SR-IOV Operator . SR-IOV can segment a compliant network device, recognized on the host node as a physical function (PF), into multiple virtual functions (VFs). The VF is used like any other network device. The SR-IOV network device driver for the device determines how the VF is exposed in the container: netdevice driver: A regular kernel network device in the netns of the container vfio-pci driver: A character device mounted in the container You can use SR-IOV network devices with additional networks on your OpenShift Container Platform cluster installed on bare metal or Red Hat OpenStack Platform (RHOSP) infrastructure for applications that require high bandwidth or low latency. You can configure multi-network policies for SR-IOV networks. The support for this is technology preview and SR-IOV additional networks are only supported with kernel NICs. They are not supported for Data Plane Development Kit (DPDK) applications. Note Creating multi-network policies on SR-IOV networks might not deliver the same performance to applications compared to SR-IOV networks without a multi-network policy configured. Important Multi-network policies for SR-IOV network is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can enable SR-IOV on a node by using the following command: USD oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable="true" Additional resources Installing the SR-IOV Network Operator 18.1.1. Components that manage SR-IOV network devices The SR-IOV Network Operator creates and manages the components of the SR-IOV stack. The Operator performs the following functions: Orchestrates discovery and management of SR-IOV network devices Generates NetworkAttachmentDefinition custom resources for the SR-IOV Container Network Interface (CNI) Creates and updates the configuration of the SR-IOV network device plugin Creates node specific SriovNetworkNodeState custom resources Updates the spec.interfaces field in each SriovNetworkNodeState custom resource The Operator provisions the following components: SR-IOV network configuration daemon A daemon set that is deployed on worker nodes when the SR-IOV Network Operator starts. The daemon is responsible for discovering and initializing SR-IOV network devices in the cluster. SR-IOV Network Operator webhook A dynamic admission controller webhook that validates the Operator custom resource and sets appropriate default values for unset fields. SR-IOV Network resources injector A dynamic admission controller webhook that provides functionality for patching Kubernetes pod specifications with requests and limits for custom network resources such as SR-IOV VFs. 
The SR-IOV network resources injector adds the resource field to only the first container in a pod automatically. SR-IOV network device plugin A device plugin that discovers, advertises, and allocates SR-IOV network virtual function (VF) resources. Device plugins are used in Kubernetes to enable the use of limited resources, typically in physical devices. Device plugins give the Kubernetes scheduler awareness of resource availability, so that the scheduler can schedule pods on nodes with sufficient resources. SR-IOV CNI plugin A CNI plugin that attaches VF interfaces allocated from the SR-IOV network device plugin directly into a pod. SR-IOV InfiniBand CNI plugin A CNI plugin that attaches InfiniBand (IB) VF interfaces allocated from the SR-IOV network device plugin directly into a pod. Note The SR-IOV Network resources injector and SR-IOV Network Operator webhook are enabled by default and can be disabled by editing the default SriovOperatorConfig CR. Use caution when disabling the SR-IOV Network Operator Admission Controller webhook. You can disable the webhook under specific circumstances, such as troubleshooting, or if you want to use unsupported devices. 18.1.1.1. Supported platforms The SR-IOV Network Operator is supported on the following platforms: Bare metal Red Hat OpenStack Platform (RHOSP) 18.1.1.2. Supported devices OpenShift Container Platform supports the following network interface controllers: Table 18.1. Supported network interface controllers Manufacturer Model Vendor ID Device ID Broadcom BCM57414 14e4 16d7 Broadcom BCM57508 14e4 1750 Broadcom BCM57504 14e4 1751 Intel X710 8086 1572 Intel X710 Backplane 8086 1581 Intel X710 Base T 8086 15ff Intel XL710 8086 1583 Intel XXV710 8086 158b Intel E810-CQDA2 8086 1592 Intel E810-2CQDA2 8086 1592 Intel E810-XXVDA2 8086 159b Intel E810-XXVDA4 8086 1593 Intel E810-XXVDA4T 8086 1593 Intel Ice E810-XXV Backplane 8086 1599 Intel Ice E823L Backplane 8086 124c Intel Ice E823L SFP 8086 124d Marvell OCTEON Fusion CNF105XX 177d ba00 Marvell OCTEON10 CN10XXX 1177d b900 Mellanox MT27700 Family [ConnectX‐4] 15b3 1013 Mellanox MT27710 Family [ConnectX‐4 Lx] 15b3 1015 Mellanox MT27800 Family [ConnectX‐5] 15b3 1017 Mellanox MT28880 Family [ConnectX‐5 Ex] 15b3 1019 Mellanox MT28908 Family [ConnectX‐6] 15b3 101b Mellanox MT2892 Family [ConnectX‐6 Dx] 15b3 101d Mellanox MT2894 Family [ConnectX‐6 Lx] 15b3 101f Mellanox Mellanox MT2910 Family [ConnectX‐7] 15b3 1021 Mellanox MT42822 BlueField‐2 in ConnectX‐6 NIC mode 15b3 a2d6 Pensando [1] DSC-25 dual-port 25G distributed services card for ionic driver 0x1dd8 0x1002 Pensando [1] DSC-100 dual-port 100G distributed services card for ionic driver 0x1dd8 0x1003 Silicom STS Family 8086 1591 OpenShift SR-IOV is supported, but you must set a static, Virtual Function (VF) media access control (MAC) address using the SR-IOV CNI config file when using SR-IOV. Note For the most up-to-date list of supported cards and compatible OpenShift Container Platform versions available, see Openshift Single Root I/O Virtualization (SR-IOV) and PTP hardware networks Support Matrix . 18.1.2. Additional resources Configuring multi-network policy 18.1.3. steps Configuring the SR-IOV Network Operator Configuring an SR-IOV network device If you use OpenShift Virtualization: Connecting a virtual machine to an SR-IOV network Configuring an SR-IOV network attachment Ethernet network attachement: Adding a pod to an SR-IOV additional network InfiniBand network attachement: Adding a pod to an SR-IOV additional network 18.2. 
Configuring an SR-IOV network device You can configure a Single Root I/O Virtualization (SR-IOV) device in your cluster. Before you perform any tasks in the following documentation, ensure that you installed the SR-IOV Network Operator . 18.2.1. SR-IOV network node configuration object You specify the SR-IOV network device configuration for a node by creating an SR-IOV network node policy. The API object for the policy is part of the sriovnetwork.openshift.io API group. The following YAML describes an SR-IOV network node policy: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 needVhostNet: false 7 numVfs: <num> 8 externallyManaged: false 9 nicSelector: 10 vendor: "<vendor_code>" 11 deviceID: "<device_id>" 12 pfNames: ["<pf_name>", ...] 13 rootDevices: ["<pci_bus_id>", ...] 14 netFilter: "<filter_string>" 15 deviceType: <device_type> 16 isRdma: false 17 linkType: <link_type> 18 eSwitchMode: "switchdev" 19 excludeTopology: false 20 1 The name for the custom resource object. 2 The namespace where the SR-IOV Network Operator is installed. 3 The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name. When specifying a name, be sure to use the accepted syntax expression ^[a-zA-Z0-9_]+USD in the resourceName . 4 The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only. Important The SR-IOV Network Operator applies node network configuration policies to nodes in sequence. Before applying node network configuration policies, the SR-IOV Network Operator checks if the machine config pool (MCP) for a node is in an unhealthy state such as Degraded or Updating . If a node is in an unhealthy MCP, the process of applying node network configuration policies to all targeted nodes in the cluster pauses until the MCP returns to a healthy state. To avoid a node in an unhealthy MCP from blocking the application of node network configuration policies to other nodes, including nodes in other MCPs, you must create a separate node network configuration policy for each MCP. 5 Optional: The priority is an integer value between 0 and 99 . A smaller value receives higher priority. For example, a priority of 10 is a higher priority than 99 . The default value is 99 . 6 Optional: The maximum transmission unit (MTU) of the physical function and all its virtual functions. The maximum MTU value can vary for different network interface controller (NIC) models. Important If you want to create virtual function on the default network interface, ensure that the MTU is set to a value that matches the cluster MTU. If you want to modify the MTU of a single virtual function while the function is assigned to a pod, leave the MTU value blank in the SR-IOV network node policy. Otherwise, the SR-IOV Network Operator reverts the MTU of the virtual function to the MTU value defined in the SR-IOV network node policy, which might trigger a node drain. 7 Optional: Set needVhostNet to true to mount the /dev/vhost-net device in the pod. Use the mounted /dev/vhost-net device with Data Plane Development Kit (DPDK) to forward traffic to the kernel network stack. 
8 The number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 9 The externallyManaged field indicates whether the SR-IOV Network Operator manages all, or only a subset of virtual functions (VFs). With the value set to false the SR-IOV Network Operator manages and configures all VFs on the PF. Note When externallyManaged is set to true , you must manually create the Virtual Functions (VFs) on the physical function (PF) before applying the SriovNetworkNodePolicy resource. If the VFs are not pre-created, the SR-IOV Network Operator's webhook will block the policy request. When externallyManaged is set to false , the SR-IOV Network Operator automatically creates and manages the VFs, including resetting them if necessary. To use VFs on the host system, you must create them through NMState, and set externallyManaged to true . In this mode, the SR-IOV Network Operator does not modify the PF or the manually managed VFs, except for those explicitly defined in the nicSelector field of your policy. However, the SR-IOV Network Operator continues to manage VFs that are used as pod secondary interfaces. 10 The NIC selector identifies the device to which this resource applies. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they refer to the same device. If you specify a value for netFilter , then you do not need to specify any other parameter because a network ID is unique. 11 Optional: The vendor hexadecimal vendor identifier of the SR-IOV network device. The only allowed values are 8086 (Intel) and 15b3 (Mellanox). 12 Optional: The device hexadecimal device identifier of the SR-IOV network device. For example, 101b is the device ID for a Mellanox ConnectX-6 device. 13 Optional: An array of one or more physical function (PF) names the resource must apply to. 14 Optional: An array of one or more PCI bus addresses the resource must apply to. For example 0000:02:00.1 . 15 Optional: The platform-specific network filter. The only supported platform is Red Hat OpenStack Platform (RHOSP). Acceptable values use the following format: openstack/NetworkID:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx . Replace xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx with the value from the /var/config/openstack/latest/network_data.json metadata file. This filter ensures that VFs are associated with a specific OpenStack network. The operator uses this filter to map the VFs to the appropriate network based on metadata provided by the OpenStack platform. 16 Optional: The driver to configure for the VFs created from this resource. The only allowed values are netdevice and vfio-pci . The default value is netdevice . For a Mellanox NIC to work in DPDK mode on bare metal nodes, use the netdevice driver type and set isRdma to true . 17 Optional: Configures whether to enable remote direct memory access (RDMA) mode. The default value is false . If the isRdma parameter is set to true , you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. 
Set isRdma to true and additionally set needVhostNet to true to configure a Mellanox NIC for use with Fast Datapath DPDK applications. Note You cannot set the isRdma parameter to true for intel NICs. 18 Optional: The link type for the VFs. The default value is eth for Ethernet. Change this value to 'ib' for InfiniBand. When linkType is set to ib , isRdma is automatically set to true by the SR-IOV Network Operator webhook. When linkType is set to ib , deviceType should not be set to vfio-pci . Do not set linkType to eth for SriovNetworkNodePolicy, because this can lead to an incorrect number of available devices reported by the device plugin. 19 Optional: To enable hardware offloading, you must set the eSwitchMode field to "switchdev" . For more information about hardware offloading, see "Configuring hardware offloading". 20 Optional: To exclude advertising an SR-IOV network resource's NUMA node to the Topology Manager, set the value to true . The default value is false . 18.2.1.1. SR-IOV network node configuration examples The following example describes the configuration for an InfiniBand device: Example configuration for an InfiniBand device apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> namespace: openshift-sriov-network-operator spec: resourceName: <sriov_resource_name> nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: <num> nicSelector: vendor: "<vendor_code>" deviceID: "<device_id>" rootDevices: - "<pci_bus_id>" linkType: <link_type> isRdma: true # ... The following example describes the configuration for an SR-IOV network device in a RHOSP virtual machine: Example configuration for an SR-IOV device in a virtual machine apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> namespace: openshift-sriov-network-operator spec: resourceName: <sriov_resource_name> nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 1 1 nicSelector: vendor: "<vendor_code>" deviceID: "<device_id>" netFilter: "openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509" 2 # ... 1 When configuring the node network policy for a virtual machine, the numVfs parameter is always set to 1 . 2 When the virtual machine is deployed on RHOSP, the netFilter parameter must refer to a network ID. Valid values for netFilter are available from an SriovNetworkNodeState object. 18.2.1.2. Automated discovery of SR-IOV network devices The SR-IOV Network Operator searches your cluster for SR-IOV capable network devices on worker nodes. The Operator creates and updates a SriovNetworkNodeState custom resource (CR) for each worker node that provides a compatible SR-IOV network device. The CR is assigned the same name as the worker node. The status.interfaces list provides information about the network devices on a node. Important Do not modify a SriovNetworkNodeState object. The Operator creates and manages these resources automatically. 18.2.1.2.1. 
Example SriovNetworkNodeState object The following YAML is an example of a SriovNetworkNodeState object created by the SR-IOV Network Operator: An SriovNetworkNodeState object apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: "39824" status: interfaces: 2 - deviceID: "1017" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: "0000:18:00.0" totalvfs: 8 vendor: 15b3 - deviceID: "1017" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: "0000:18:00.1" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: "8086" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: "8086" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: "8086" syncStatus: Succeeded 1 The value of the name field is the same as the name of the worker node. 2 The interfaces stanza includes a list of all of the SR-IOV devices discovered by the Operator on the worker node. 18.2.1.3. Virtual function (VF) partitioning for SR-IOV devices In some cases, you might want to split virtual functions (VFs) from the same physical function (PF) into multiple resource pools. For example, you might want some of the VFs to load with the default driver and the remaining VFs load with the vfio-pci driver. In such a deployment, the pfNames selector in your SriovNetworkNodePolicy custom resource (CR) can be used to specify a range of VFs for a pool using the following format: <pfname>#<first_vf>-<last_vf> . For example, the following YAML shows the selector for an interface named netpf0 with VF 2 through 7 : pfNames: ["netpf0#2-7"] netpf0 is the PF interface name. 2 is the first VF index (0-based) that is included in the range. 7 is the last VF index (0-based) that is included in the range. You can select VFs from the same PF by using different policy CRs if the following requirements are met: The numVfs value must be identical for policies that select the same PF. The VF index must be in the range of 0 to <numVfs>-1 . For example, if you have a policy with numVfs set to 8 , then the <first_vf> value must not be smaller than 0 , and the <last_vf> must not be larger than 7 . The VFs ranges in different policies must not overlap. The <first_vf> must not be larger than the <last_vf> . The following example illustrates NIC partitioning for an SR-IOV device. The policy policy-net-1 defines a resource pool net-1 that contains the VF 0 of PF netpf0 with the default VF driver. The policy policy-net-1-dpdk defines a resource pool net-1-dpdk that contains the VF 8 to 15 of PF netpf0 with the vfio VF driver. 
Policy policy-net-1 : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 16 nicSelector: pfNames: ["netpf0#0-0"] deviceType: netdevice Policy policy-net-1-dpdk : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 16 nicSelector: pfNames: ["netpf0#8-15"] deviceType: vfio-pci Verifying that the interface is successfully partitioned Confirm that the interface partitioned to virtual functions (VFs) for the SR-IOV device by running the following command. USD ip link show <interface> 1 1 Replace <interface> with the interface that you specified when partitioning to VFs for the SR-IOV device, for example, ens3f1 . Example output 5: ens3f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 3c:fd:fe:d1:bc:01 brd ff:ff:ff:ff:ff:ff vf 0 link/ether 5a:e7:88:25:ea:a0 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 1 link/ether 3e:1d:36:d7:3d:49 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 2 link/ether ce:09:56:97:df:f9 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 3 link/ether 5e:91:cf:88:d1:38 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 4 link/ether e6:06:a1:96:2f:de brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off 18.2.1.4. A test pod template for clusters that use SR-IOV on OpenStack The following testpmd pod demonstrates container creation with huge pages, reserved CPUs, and the SR-IOV port. An example testpmd pod apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: cpu-load-balancing.crio.io: "disable" cpu-quota.crio.io: "disable" # ... spec: containers: - name: testpmd command: ["sleep", "99999"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: ["IPC_LOCK","SYS_ADMIN"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/sriov1: 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/sriov1: 1 volumeMounts: - mountPath: /dev/hugepages name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 1 volumes: - name: hugepage emptyDir: medium: HugePages 1 This example assumes that the name of the performance profile is cnf-performance profile . 18.2.1.5. A test pod template for clusters that use OVS hardware offloading on OpenStack The following testpmd pod demonstrates Open vSwitch (OVS) hardware offloading on Red Hat OpenStack Platform (RHOSP). 
An example testpmd pod apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: k8s.v1.cni.cncf.io/networks: hwoffload1 spec: runtimeClassName: performance-cnf-performanceprofile 1 containers: - name: testpmd command: ["sleep", "99999"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: ["IPC_LOCK","SYS_ADMIN"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False volumes: - name: hugepage emptyDir: medium: HugePages 1 If your performance profile is not named cnf-performance profile , replace that string with the correct performance profile name. 18.2.1.6. Huge pages resource injection for Downward API When a pod specification includes a resource request or limit for huge pages, the Network Resources Injector automatically adds Downward API fields to the pod specification to provide the huge pages information to the container. The Network Resources Injector adds a volume that is named podnetinfo and is mounted at /etc/podnetinfo for each container in the pod. The volume uses the Downward API and includes a file for huge pages requests and limits. The file naming convention is as follows: /etc/podnetinfo/hugepages_1G_request_<container-name> /etc/podnetinfo/hugepages_1G_limit_<container-name> /etc/podnetinfo/hugepages_2M_request_<container-name> /etc/podnetinfo/hugepages_2M_limit_<container-name> The paths specified in the list are compatible with the app-netutil library. By default, the library is configured to search for resource information in the /etc/podnetinfo directory. If you choose to specify the Downward API path items yourself manually, the app-netutil library searches for the following paths in addition to the paths in the list. /etc/podnetinfo/hugepages_request /etc/podnetinfo/hugepages_limit /etc/podnetinfo/hugepages_1G_request /etc/podnetinfo/hugepages_1G_limit /etc/podnetinfo/hugepages_2M_request /etc/podnetinfo/hugepages_2M_limit As with the paths that the Network Resources Injector can create, the paths in the preceding list can optionally end with a _<container-name> suffix. 18.2.2. Configuring SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. Reboot only happens in the following cases: With Mellanox NICs ( mlx5 driver) a node reboot happens every time the number of virtual functions (VFs) increase on a physical function (PF). With Intel NICs, a reboot only happens if the kernel parameters do not include intel_iommu=on and iommu=pt . It might take several minutes for a configuration change to apply. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. You have enough available nodes in your cluster to handle the evicted workload from drained nodes. You have not selected any control plane nodes for SR-IOV network device configuration. Procedure Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. 
Replace <name> with the name for this configuration. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object: USD oc create -f <name>-sriov-node-network.yaml where <name> specifies the name for this configuration. After applying the configuration update, all the pods in sriov-network-operator namespace transition to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' Additional resources Understanding how to update labels on nodes . 18.2.3. Creating a non-uniform memory access (NUMA) aligned SR-IOV pod You can create a NUMA aligned SR-IOV pod by restricting SR-IOV and the CPU resources allocated from the same NUMA node with restricted or single-numa-node Topology Manager polices. Prerequisites You have installed the OpenShift CLI ( oc ). You have configured the CPU Manager policy to static . For more information on CPU Manager, see the "Additional resources" section. You have configured the Topology Manager policy to single-numa-node . Note When single-numa-node is unable to satisfy the request, you can configure the Topology Manager policy to restricted . For more flexible SR-IOV network resource scheduling, see Excluding SR-IOV network topology during NUMA-aware scheduling in the Additional resources section. Procedure Create the following SR-IOV pod spec, and then save the YAML in the <name>-sriov-pod.yaml file. Replace <name> with a name for this pod. The following example shows an SR-IOV pod spec: apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: ["sleep", "infinity"] resources: limits: memory: "1Gi" 3 cpu: "2" 4 requests: memory: "1Gi" cpu: "2" 1 Replace <name> with the name of the SR-IOV network attachment definition CR. 2 Replace <image> with the name of the sample-pod image. 3 To create the SR-IOV pod with guaranteed QoS, set memory limits equal to memory requests . 4 To create the SR-IOV pod with guaranteed QoS, set cpu limits equals to cpu requests . Create the sample SR-IOV pod by running the following command: USD oc create -f <filename> 1 1 Replace <filename> with the name of the file you created in the step. Confirm that the sample-pod is configured with guaranteed QoS. USD oc describe pod sample-pod Confirm that the sample-pod is allocated with exclusive CPUs. USD oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus Confirm that the SR-IOV device and CPUs that are allocated for the sample-pod are on the same NUMA node. USD oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus 18.2.4. Exclude the SR-IOV network topology for NUMA-aware scheduling You can exclude advertising the Non-Uniform Memory Access (NUMA) node for the SR-IOV network to the Topology Manager for more flexible SR-IOV network deployments during NUMA-aware pod scheduling. In some scenarios, it is a priority to maximize CPU and memory resources for a pod on a single NUMA node. 
By not providing a hint to the Topology Manager about the NUMA node for the pod's SR-IOV network resource, the Topology Manager can deploy the SR-IOV network resource and the pod CPU and memory resources to different NUMA nodes. This can add to network latency because of the data transfer between NUMA nodes. However, it is acceptable in scenarios when workloads require optimal CPU and memory performance. For example, consider a compute node, compute-1 , that features two NUMA nodes: numa0 and numa1 . The SR-IOV-enabled NIC is present on numa0 . The CPUs available for pod scheduling are present on numa1 only. By setting the excludeTopology specification to true , the Topology Manager can assign CPU and memory resources for the pod to numa1 and can assign the SR-IOV network resource for the same pod to numa0 . This is only possible when you set the excludeTopology specification to true . Otherwise, the Topology Manager attempts to place all resources on the same NUMA node. 18.2.5. Troubleshooting SR-IOV configuration After following the procedure to configure an SR-IOV network device, the following sections address some error conditions. To display the state of nodes, run the following command: USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> where: <node_name> specifies the name of a node with an SR-IOV network device. Error output: Cannot allocate memory "lastSyncError": "write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory" When a node indicates that it cannot allocate memory, check the following items: Confirm that global SR-IOV settings are enabled in the BIOS for the node. Confirm that VT-d is enabled in the BIOS for the node. Additional resources Using CPU Manager 18.2.6. steps Configuring an SR-IOV network attachment 18.3. Configuring an SR-IOV Ethernet network attachment You can configure an Ethernet network attachment for an Single Root I/O Virtualization (SR-IOV) device in the cluster. Before you perform any tasks in the following documentation, ensure that you installed the SR-IOV Network Operator . 18.3.1. Ethernet device configuration object You can configure an Ethernet network device by defining an SriovNetwork object. The following YAML describes an SriovNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: "<spoof_check>" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 11 trust: "<trust_vf>" 12 capabilities: <capabilities> 13 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 The namespace where the SR-IOV Network Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 Optional: A Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095 . The default value is 0 . 6 Optional: The spoof check mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the object is rejected by the SR-IOV Network Operator. 
7 A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 8 Optional: The link state of virtual function (VF). Allowed value are enable , disable and auto . 9 Optional: A maximum transmission rate, in Mbps, for the VF. 10 Optional: A minimum transmission rate, in Mbps, for the VF. This value must be less than or equal to the maximum transmission rate. Note Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847 . 11 Optional: An IEEE 802.1p priority level for the VF. The default value is 0 . 12 Optional: The trust mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value that you specify in quotes, or the SR-IOV Network Operator rejects the object. 13 Optional: The capabilities to configure for this additional network. You can specify '{ "ips": true }' to enable IP address support or '{ "mac": true }' to enable MAC address support. 18.3.1.1. Creating a configuration for assignment of dual-stack IP addresses dynamically Dual-stack IP address assignment can be configured with the ipRanges parameter for: IPv4 addresses IPv6 addresses multiple IP address assignment Procedure Set type to whereabouts . Use ipRanges to allocate IP addresses as shown in the following example: cniVersion: operator.openshift.io/v1 kind: Network =metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { "name": "whereabouts-dual-stack", "cniVersion": "0.3.1, "type": "bridge", "ipam": { "type": "whereabouts", "ipRanges": [ {"range": "192.168.10.0/24"}, {"range": "2001:db8::/64"} ] } } Attach network to a pod. For more information, see "Adding a pod to an additional network". Verify that all IP addresses are assigned. Run the following command to ensure the IP addresses are assigned as metadata. USD oc exec -it mypod -- ip a 18.3.1.2. Configuration of IP address assignment for a network attachment The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 18.3.1.2.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 18.2. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 18.3. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 18.4. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. 
gw string The gateway where network traffic is routed. Table 18.5. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses for to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 18.3.1.2.2. Dynamic IP address (DHCP) assignment configuration A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. Important For an Ethernet network attachment, the SR-IOV Network Operator does not create a DHCP server deployment; the Cluster Network Operator is responsible for creating the minimal DHCP server deployment. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... The following table describes the configuration parameters for dynamic IP address address assignment with DHCP. Table 18.6. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. The following JSON example describes the configuration p for dynamic IP address address assignment with DHCP. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 18.3.1.2.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The Whereabouts CNI plugin also supports overlapping IP address ranges and configuration of the same CIDR range multiple times within separate NetworkAttachmentDefinition CRDs. This provides greater flexibility and management capabilities in multi-tenant environments. 18.3.1.2.3.1. Dynamic IP address configuration objects The following table describes the configuration objects for dynamic IP address assignment with Whereabouts: Table 18.7. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. network_name string Optional: Helps ensure that each group or domain of pods gets its own set of IP addresses, even if they share the same range of IP addresses. Setting this field is important for keeping networks separate and organized, notably in multi-tenant environments. 18.3.1.2.3.2. 
Dynamic IP address assignment configuration that uses Whereabouts The following example shows a dynamic address assignment configuration that uses Whereabouts: Whereabouts dynamic IP address assignment { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 18.3.1.2.3.3. Dynamic IP address assignment that uses Whereabouts with overlapping IP address ranges The following example shows a dynamic IP address assignment that uses overlapping IP address ranges for multi-tenant networks. NetworkAttachmentDefinition 1 { "ipam": { "type": "whereabouts", "range": "192.0.2.192/29", "network_name": "example_net_common", 1 } } 1 Optional. If set, must match the network_name of NetworkAttachmentDefinition 2 . NetworkAttachmentDefinition 2 { "ipam": { "type": "whereabouts", "range": "192.0.2.192/24", "network_name": "example_net_common", 1 } } 1 Optional. If set, must match the network_name of NetworkAttachmentDefinition 1 . 18.3.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete an SriovNetwork object if it is attached to any pods in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a SriovNetwork object, and then save the YAML in the <name>.yaml file, where <name> is a name for this additional network. The object specification might resemble the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "gateway": "10.56.217.1" } To create the object, enter the following command: USD oc create -f <name>.yaml where <name> specifies the name of the additional network. Optional: To confirm that the NetworkAttachmentDefinition object that is associated with the SriovNetwork object that you created in the step exists, enter the following command. Replace <namespace> with the networkNamespace you specified in the SriovNetwork object. USD oc get net-attach-def -n <namespace> 18.3.3. Assigning an SR-IOV network to a VRF As a cluster administrator, you can assign an SR-IOV network interface to your VRF domain by using the CNI VRF plugin. To do this, add the VRF configuration to the optional metaPlugins parameter of the SriovNetwork resource. Note Applications that use VRFs need to bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. SO_BINDTODEVICE binds the socket to a device that is specified in the passed interface name, for example, eth1 . To use SO_BINDTODEVICE , the application must have CAP_NET_RAW capabilities. Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods. To use VRF, bind applications directly to the VRF interface. 18.3.3.1. Creating an additional SR-IOV network attachment with the CNI VRF plugin The SR-IOV Network Operator manages additional network definitions. When you specify an additional SR-IOV network to create, the SR-IOV Network Operator creates the NetworkAttachmentDefinition custom resource (CR) automatically. 
Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. To create an additional SR-IOV network attachment with the CNI VRF plugin, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Procedure Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment and insert the metaPlugins configuration, as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml . Example SriovNetwork custom resource (CR) example apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [{ "dst": "0.0.0.0/0" }], "gateway": "10.56.217.1" } vlan: 0 resourceName: intelnics metaPlugins : | { "type": "vrf", 1 "vrfname": "example-vrf-name" 2 } 1 type must be set to vrf . 2 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created. Create the SriovNetwork resource: USD oc create -f sriov-network-attachment.yaml Verifying that the NetworkAttachmentDefinition CR is successfully created Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command: USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-sriov-network-1 . Example output NAME AGE additional-sriov-network-1 14m Note There might be a delay before the SR-IOV Network Operator creates the CR. Verifying that the additional SR-IOV network attachment is successful To verify that the VRF CNI is correctly configured and that the additional SR-IOV network attachment is attached, do the following: Create an SR-IOV network that uses the VRF CNI. Assign the network to a pod. Verify that the pod network attachment is connected to the SR-IOV additional network. Remote shell into the pod and run the following command: USD ip vrf show Example output Name Table ----------------------- red 10 Confirm that the VRF interface is master of the secondary interface by running the following command: USD ip link Example output ... 5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode ... 18.3.4. Runtime configuration for an Ethernet-based SR-IOV attachment When attaching a pod to an additional network, you can specify a runtime configuration to make specific customizations for the pod. For example, you can request a specific MAC hardware address. You specify the runtime configuration by setting an annotation in the pod specification. The annotation key is k8s.v1.cni.cncf.io/networks , and it accepts a JSON object that describes the runtime configuration. The following JSON describes the runtime configuration options for an Ethernet-based SR-IOV network attachment. [ { "name": "<name>", 1 "mac": "<mac_address>", 2 "ips": ["<cidr_range>"] 3 } ] 1 The name of the SR-IOV network attachment definition CR. 2 Optional: The MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. 
To use this feature, you also must specify { "mac": true } in the SriovNetwork object. 3 Optional: IP addresses for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Example runtime configuration apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "net1", "mac": "20:04:0f:f1:88:01", "ips": ["192.168.10.1/24", "2001::1/64"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 18.3.5. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. When a pod is created, additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. The pod must be in the same namespace as the additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace around the commas. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network. To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. USD oc create -f <name>.yaml Optional: To confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod. USD oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: USD oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/network-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 18.3.6. Configuring parallel node draining during SR-IOV network policy updates By default, the SR-IOV Network Operator drains workloads from a node before every policy change.
The Operator performs this action, one node at a time, to ensure that no workloads are affected by the reconfiguration. In large clusters, draining nodes sequentially can be time-consuming, taking hours or even days. In time-sensitive environments, you can enable parallel node draining in an SriovNetworkPoolConfig custom resource (CR) for faster rollouts of SR-IOV network configurations. To configure parallel draining, use the SriovNetworkPoolConfig CR to create a node pool. You can then add nodes to the pool and define the maximum number of nodes in the pool that the Operator can drain in parallel. With this approach, you can enable parallel draining for faster reconfiguration while ensuring you still have enough nodes remaining in the pool to handle any running workloads. Note A node can only belong to one SR-IOV network pool configuration. If a node is not part of a pool, it is added to a virtual, default pool that is configured to drain one node at a time only. The node might restart during the draining process. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the SR-IOV Network Operator. Nodes have hardware that supports SR-IOV. Procedure Create a SriovNetworkPoolConfig resource: Create a YAML file that defines the SriovNetworkPoolConfig resource: Example sriov-nw-pool.yaml file apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkPoolConfig metadata: name: pool-1 1 namespace: openshift-sriov-network-operator 2 spec: maxUnavailable: 2 3 nodeSelector: 4 matchLabels: node-role.kubernetes.io/worker: "" 1 Specify the name of the SriovNetworkPoolConfig object. 2 Specify the namespace where the SR-IOV Network Operator is installed. 3 Specify an integer number, or percentage value, for nodes that can be unavailable in the pool during an update. For example, if you have 10 nodes and you set the maximum unavailable to 2, then only 2 nodes can be drained in parallel at any time, leaving 8 nodes for handling workloads. 4 Specify the nodes to add to the pool by using the node selector. This example adds all nodes with the worker role to the pool.
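The maxUnavailable field also accepts a percentage, as mentioned in callout 3. The following sketch is an assumption-based variant of the same pool, not part of the documented procedure, that lets the Operator drain up to 10% of the matching nodes at the same time; whether the percentage must be quoted as a string is an assumption here:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkPoolConfig
metadata:
  name: pool-1
  namespace: openshift-sriov-network-operator
spec:
  maxUnavailable: "10%"   # percentage form; 10% of 20 matching worker nodes allows 2 parallel drains
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
With 20 worker nodes in the pool, this configuration would allow up to 2 nodes to drain in parallel while the remaining 18 nodes continue to run workloads.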
Create the SriovNetworkPoolConfig resource by running the following command: USD oc create -f sriov-nw-pool.yaml Create the sriov-test namespace by running the following command: USD oc create namespace sriov-test Create a SriovNetworkNodePolicy resource: Create a YAML file that defines the SriovNetworkNodePolicy resource: Example sriov-node-policy.yaml file apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: pfNames: ["ens1"] nodeSelector: node-role.kubernetes.io/worker: "" numVfs: 5 priority: 99 resourceName: sriov_nic_1 Create the SriovNetworkNodePolicy resource by running the following command: USD oc create -f sriov-node-policy.yaml Create a SriovNetwork resource: Create a YAML file that defines the SriovNetwork resource: Example sriov-network.yaml file apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nic-1 namespace: openshift-sriov-network-operator spec: linkState: auto networkNamespace: sriov-test resourceName: sriov_nic_1 capabilities: '{ "mac": true, "ips": true }' ipam: '{ "type": "static" }' Create the SriovNetwork resource by running the following command: USD oc create -f sriov-network.yaml Verification View the node pool you created by running the following command: USD oc get sriovNetworkpoolConfig -n openshift-sriov-network-operator Example output NAME AGE pool-1 67s 1 1 In this example, pool-1 contains all the nodes with the worker role. To demonstrate the node draining process using the example scenario from the preceding procedure, complete the following steps: Update the number of virtual functions in the SriovNetworkNodePolicy resource to trigger workload draining in the cluster: USD oc patch SriovNetworkNodePolicy sriov-nic-1 -n openshift-sriov-network-operator --type merge -p '{"spec": {"numVfs": 4}}' Monitor the draining status on the target cluster by running the following command: USD oc get sriovNetworkNodeState -n openshift-sriov-network-operator Example output NAMESPACE NAME SYNC STATUS DESIRED SYNC STATE CURRENT SYNC STATE AGE openshift-sriov-network-operator worker-0 InProgress Drain_Required DrainComplete 3d10h openshift-sriov-network-operator worker-1 InProgress Drain_Required DrainComplete 3d10h When the draining process is complete, the SYNC STATUS changes to Succeeded , and the DESIRED SYNC STATE and CURRENT SYNC STATE values return to Idle . Example output NAMESPACE NAME SYNC STATUS DESIRED SYNC STATE CURRENT SYNC STATE AGE openshift-sriov-network-operator worker-0 Succeeded Idle Idle 3d10h openshift-sriov-network-operator worker-1 Succeeded Idle Idle 3d10h 18.3.7. Excluding the SR-IOV network topology for NUMA-aware scheduling To exclude advertising the SR-IOV network resource's Non-Uniform Memory Access (NUMA) node to the Topology Manager, you can configure the excludeTopology specification in the SriovNetworkNodePolicy custom resource. Use this configuration for more flexible SR-IOV network deployments during NUMA-aware pod scheduling. Prerequisites You have installed the OpenShift CLI ( oc ). You have configured the CPU Manager policy to static . For more information about CPU Manager, see the Additional resources section. You have configured the Topology Manager policy to single-numa-node . You have installed the SR-IOV Network Operator.
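Before you continue, you can optionally confirm that the CPU Manager and Topology Manager policies listed in the prerequisites are in place. The following check is a sketch that assumes the policies were applied through a KubeletConfig custom resource; the exact object name depends on your environment:
USD oc get kubeletconfig -o yaml | grep -E 'cpuManagerPolicy|topologyManagerPolicy'
The output should include cpuManagerPolicy: static and topologyManagerPolicy: single-numa-node for the machine config pool that hosts your SR-IOV capable nodes.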
Procedure Create the SriovNetworkNodePolicy CR: Save the following YAML in the sriov-network-node-policy.yaml file, replacing values in the YAML to match your environment: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <policy_name> namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 1 nodeSelector: kubernetes.io/hostname: <node_name> numVfs: <number_of_Vfs> nicSelector: 2 vendor: "<vendor_ID>" deviceID: "<device_ID>" deviceType: netdevice excludeTopology: true 3 1 The resource name of the SR-IOV network device plugin. This YAML uses a sample resourceName value. 2 Identify the device for the Operator to configure by using the NIC selector. 3 To exclude advertising the NUMA node for the SR-IOV network resource to the Topology Manager, set the value to true . The default value is false . Note If multiple SriovNetworkNodePolicy resources target the same SR-IOV network resource, the SriovNetworkNodePolicy resources must have the same value as the excludeTopology specification. Otherwise, the conflicting policy is rejected. Create the SriovNetworkNodePolicy resource by running the following command: USD oc create -f sriov-network-node-policy.yaml Example output sriovnetworknodepolicy.sriovnetwork.openshift.io/policy-for-numa-0 created Create the SriovNetwork CR: Save the following YAML in the sriov-network.yaml file, replacing values in the YAML to match your environment: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-numa-0-network 1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 2 networkNamespace: <namespace> 3 ipam: |- 4 { "type": "<ipam_type>", } 1 Replace sriov-numa-0-network with the name for the SR-IOV network resource. 2 Specify the resource name for the SriovNetworkNodePolicy CR from the step. This YAML uses a sample resourceName value. 3 Enter the namespace for your SR-IOV network resource. 4 Enter the IP address management configuration for the SR-IOV network. Create the SriovNetwork resource by running the following command: USD oc create -f sriov-network.yaml Example output sriovnetwork.sriovnetwork.openshift.io/sriov-numa-0-network created Create a pod and assign the SR-IOV network resource from the step: Save the following YAML in the sriov-network-pod.yaml file, replacing values in the YAML to match your environment: apiVersion: v1 kind: Pod metadata: name: <pod_name> annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "sriov-numa-0-network", 1 } ] spec: containers: - name: <container_name> image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 1 This is the name of the SriovNetwork resource that uses the SriovNetworkNodePolicy resource. Create the Pod resource by running the following command: USD oc create -f sriov-network-pod.yaml Example output pod/example-pod created Verification Verify the status of the pod by running the following command, replacing <pod_name> with the name of the pod: USD oc get pod <pod_name> Example output NAME READY STATUS RESTARTS AGE test-deployment-sriov-76cbbf4756-k9v72 1/1 Running 0 45h Open a debug session with the target pod to verify that the SR-IOV network resources are deployed to a different node than the memory and CPU resources. Open a debug session with the pod by running the following command, replacing <pod_name> with the target pod name. USD oc debug pod/<pod_name> Set /host as the root directory within the debug shell. 
The debug pod mounts the root file system from the host in /host within the pod. By changing the root directory to /host , you can run binaries from the host file system: USD chroot /host View information about the CPU allocation by running the following commands: USD lscpu | grep NUMA Example output NUMA node(s): 2 NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,... NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,... USD cat /proc/self/status | grep Cpus Example output Cpus_allowed: aa Cpus_allowed_list: 1,3,5,7 USD cat /sys/class/net/net1/device/numa_node Example output 0 In this example, CPUs 1,3,5, and 7 are allocated to NUMA node1 but the SR-IOV network resource can use the NIC in NUMA node0 . Note If the excludeTopology specification is set to true , it is possible that the required resources exist in the same NUMA node. 18.3.8. Additional resources Configuring an SR-IOV network device Using CPU Manager 18.4. Configuring an SR-IOV InfiniBand network attachment You can configure an InfiniBand (IB) network attachment for a Single Root I/O Virtualization (SR-IOV) device in the cluster. Before you perform any tasks in the following documentation, ensure that you installed the SR-IOV Network Operator . 18.4.1. InfiniBand device configuration object You can configure an InfiniBand (IB) network device by defining an SriovIBNetwork object. The following YAML describes an SriovIBNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name. 2 The namespace where the SR-IOV Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovIBNetwork object. Only pods in the target namespace can attach to the network device. 5 Optional: A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 6 Optional: The link state of virtual function (VF). Allowed values are enable , disable and auto . 7 Optional: The capabilities to configure for this network. You can specify '{ "ips": true }' to enable IP address support or '{ "infinibandGUID": true }' to enable IB Global Unique Identifier (GUID) support. 18.4.1.1. Creating a configuration for assignment of dual-stack IP addresses dynamically Dual-stack IP address assignment can be configured with the ipRanges parameter for: IPv4 addresses IPv6 addresses multiple IP address assignment Procedure Set type to whereabouts . Use ipRanges to allocate IP addresses as shown in the following example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { "name": "whereabouts-dual-stack", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts", "ipRanges": [ {"range": "192.168.10.0/24"}, {"range": "2001:db8::/64"} ] } } Attach the network to a pod. For more information, see "Adding a pod to an additional network". Verify that all IP addresses are assigned. Run the following command to ensure the IP addresses are assigned as metadata.
USD oc exec -it mypod -- ip a 18.4.1.2. Configuration of IP address assignment for a network attachment The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 18.4.1.2.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 18.8. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 18.9. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 18.10. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 18.11. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 18.4.1.2.2. Dynamic IP address (DHCP) assignment configuration A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. Important For an Ethernet network attachment, the SR-IOV Network Operator does not create a DHCP server deployment; the Cluster Network Operator is responsible for creating the minimal DHCP server deployment. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... The following table describes the configuration parameters for dynamic IP address assignment with DHCP. Table 18.12. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. The following JSON example describes the configuration for dynamic IP address assignment with DHCP.
Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 18.4.1.2.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The Whereabouts CNI plugin also supports overlapping IP address ranges and configuration of the same CIDR range multiple times within separate NetworkAttachmentDefinition CRDs. This provides greater flexibility and management capabilities in multi-tenant environments. 18.4.1.2.3.1. Dynamic IP address configuration objects The following table describes the configuration objects for dynamic IP address assignment with Whereabouts: Table 18.13. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. network_name string Optional: Helps ensure that each group or domain of pods gets its own set of IP addresses, even if they share the same range of IP addresses. Setting this field is important for keeping networks separate and organized, notably in multi-tenant environments. 18.4.1.2.3.2. Dynamic IP address assignment configuration that uses Whereabouts The following example shows a dynamic address assignment configuration that uses Whereabouts: Whereabouts dynamic IP address assignment { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 18.4.1.2.3.3. Dynamic IP address assignment that uses Whereabouts with overlapping IP address ranges The following example shows a dynamic IP address assignment that uses overlapping IP address ranges for multi-tenant networks. NetworkAttachmentDefinition 1 { "ipam": { "type": "whereabouts", "range": "192.0.2.192/29", "network_name": "example_net_common", 1 } } 1 Optional. If set, must match the network_name of NetworkAttachmentDefinition 2 . NetworkAttachmentDefinition 2 { "ipam": { "type": "whereabouts", "range": "192.0.2.192/24", "network_name": "example_net_common", 1 } } 1 Optional. If set, must match the network_name of NetworkAttachmentDefinition 1 . 18.4.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovIBNetwork object. When you create an SriovIBNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete an SriovIBNetwork object if it is attached to any pods in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a SriovIBNetwork object, and then save the YAML in the <name>.yaml file, where <name> is a name for this additional network. 
The object specification might resemble the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "gateway": "10.56.217.1" } To create the object, enter the following command: USD oc create -f <name>.yaml where <name> specifies the name of the additional network. Optional: To confirm that the NetworkAttachmentDefinition object that is associated with the SriovIBNetwork object that you created in the step exists, enter the following command. Replace <namespace> with the networkNamespace you specified in the SriovIBNetwork object. USD oc get net-attach-def -n <namespace> 18.4.3. Runtime configuration for an InfiniBand-based SR-IOV attachment When attaching a pod to an additional network, you can specify a runtime configuration to make specific customizations for the pod. For example, you can request a specific MAC hardware address. You specify the runtime configuration by setting an annotation in the pod specification. The annotation key is k8s.v1.cni.cncf.io/networks , and it accepts a JSON object that describes the runtime configuration. The following JSON describes the runtime configuration options for an InfiniBand-based SR-IOV network attachment. [ { "name": "<network_attachment>", 1 "infiniband-guid": "<guid>", 2 "ips": ["<cidr_range>"] 3 } ] 1 The name of the SR-IOV network attachment definition CR. 2 The InfiniBand GUID for the SR-IOV device. To use this feature, you also must specify { "infinibandGUID": true } in the SriovIBNetwork object. 3 The IP addresses for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovIBNetwork object. Example runtime configuration apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "ib1", "infiniband-guid": "c2:11:22:33:44:55:66:77", "ips": ["192.168.10.1/24", "2001::1/64"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 18.4.4. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. When a pod is created additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. The pod must be in the same namespace as the additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace between the comma. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network. 
To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. USD oc create -f <name>.yaml Optional: To Confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod. USD oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: USD oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/network-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 18.4.5. Additional resources Configuring an SR-IOV network device Using CPU Manager Exclude SR-IOV network topology for NUMA-aware scheduling 18.5. Configuring interface-level network sysctl settings and all-multicast mode for SR-IOV networks As a cluster administrator, you can change interface-level network sysctls and several interface attributes such as promiscuous mode, all-multicast mode, MTU, and MAC address by using the tuning Container Network Interface (CNI) meta plugin for a pod connected to a SR-IOV network device. Before you perform any tasks in the following documentation, ensure that you installed the SR-IOV Network Operator . 18.5.1. Labeling nodes with an SR-IOV enabled NIC If you want to enable SR-IOV on only SR-IOV capable nodes there are a couple of ways to do this: Install the Node Feature Discovery (NFD) Operator. NFD detects the presence of SR-IOV enabled NICs and labels the nodes with node.alpha.kubernetes-incubator.io/nfd-network-sriov.capable = true . Examine the SriovNetworkNodeState CR for each node. The interfaces stanza includes a list of all of the SR-IOV devices discovered by the SR-IOV Network Operator on the worker node. Label each node with feature.node.kubernetes.io/network-sriov.capable: "true" by using the following command: USD oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable="true" Note You can label the nodes with whatever name you want. 18.5.2. Setting one sysctl flag You can set interface-level network sysctl settings for a pod connected to a SR-IOV network device. In this example, net.ipv4.conf.IFNAME.accept_redirects is set to 1 on the created virtual interfaces. The sysctl-tuning-test is a namespace used in this example. Use the following command to create the sysctl-tuning-test namespace: 18.5.2.1. Setting one sysctl flag on nodes with SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io custom resource definition (CRD) to OpenShift Container Platform. 
You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain and reboot the nodes. It can take several minutes for a configuration change to apply. Follow this procedure to create a SriovNetworkNodePolicy custom resource (CR). Procedure Create an SriovNetworkNodePolicy custom resource (CR). For example, save the following YAML as the file policyoneflag-sriov-node-network.yaml : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyoneflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 nodeSelector: 4 feature.node.kubernetes.io/network-sriov.capable="true" priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: ["ens5"] 8 deviceType: "netdevice" 9 isRdma: false 10 1 The name for the custom resource object. 2 The namespace where the SR-IOV Network Operator is installed. 3 The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name. 4 The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only. 5 Optional: The priority is an integer value between 0 and 99 . A smaller value receives higher priority. For example, a priority of 10 is a higher priority than 99 . The default value is 99 . 6 The number of the virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 7 The NIC selector identifies the device for the Operator to configure. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they refer to the same device. If you specify a value for netFilter , then you do not need to specify any other parameter because a network ID is unique. 8 Optional: An array of one or more physical function (PF) names for the device. 9 Optional: The driver type for the virtual functions. The only allowed value is netdevice . For a Mellanox NIC to work in DPDK mode on bare metal nodes, set isRdma to true . 10 Optional: Configures whether to enable remote direct memory access (RDMA) mode. The default value is false . If the isRdma parameter is set to true , you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. Set isRdma to true and additionally set needVhostNet to true to configure a Mellanox NIC for use with Fast Datapath DPDK applications. Note The vfio-pci driver type is not supported. Create the SriovNetworkNodePolicy object: USD oc create -f policyoneflag-sriov-node-network.yaml After applying the configuration update, all the pods in sriov-network-operator namespace change to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. 
USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' Example output Succeeded 18.5.2.2. Configuring sysctl on a SR-IOV network You can set interface specific sysctl settings on virtual interfaces created by SR-IOV by adding the tuning configuration to the optional metaPlugins parameter of the SriovNetwork resource. The SR-IOV Network Operator manages additional network definitions. When you specify an additional SR-IOV network to create, the SR-IOV Network Operator creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. To change the interface-level network net.ipv4.conf.IFNAME.accept_redirects sysctl settings, create an additional SR-IOV network with the Container Network Interface (CNI) tuning plugin. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Procedure Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment and insert the metaPlugins configuration, as in the following example CR. Save the YAML as the file sriov-network-interface-sysctl.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: onevalidflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 networkNamespace: sysctl-tuning-test 4 ipam: '{ "type": "static" }' 5 capabilities: '{ "mac": true, "ips": true }' 6 metaPlugins : | 7 { "type": "tuning", "capabilities":{ "mac":true }, "sysctl":{ "net.ipv4.conf.IFNAME.accept_redirects": "1" } } 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 The namespace where the SR-IOV Network Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 6 Optional: Set capabilities for the additional network. You can specify "{ "ips": true }" to enable IP address support or "{ "mac": true }" to enable MAC address support. 7 Optional: The metaPlugins parameter is used to add additional capabilities to the device. In this use case set the type field to tuning . Specify the interface-level network sysctl you want to set in the sysctl field. Create the SriovNetwork resource: USD oc create -f sriov-network-interface-sysctl.yaml Verifying that the NetworkAttachmentDefinition CR is successfully created Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command: USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the value for networkNamespace that you specified in the SriovNetwork object. For example, sysctl-tuning-test . Example output NAME AGE onevalidflag 14m Note There might be a delay before the SR-IOV Network Operator creates the CR. 
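Beyond checking that the object exists, you can also inspect the rendered CNI configuration to confirm that the tuning plugin and the sysctl entry were merged into the attachment definition. This is an optional check that is not part of the documented procedure and assumes the example names used above:
USD oc get net-attach-def onevalidflag -n sysctl-tuning-test -o yaml
The spec.config field in the output should contain a plugins list with both the SR-IOV entry and a "type": "tuning" entry that carries the net.ipv4.conf.IFNAME.accept_redirects setting.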
Verifying that the additional SR-IOV network attachment is successful To verify that the tuning CNI is correctly configured and the additional SR-IOV network attachment is attached, do the following: Create a Pod CR. Save the following YAML as the file examplepod.yaml : apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "onevalidflag", 1 "mac": "0a:56:0a:83:04:0c", 2 "ips": ["10.100.100.200/24"] 3 } ] spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault 1 The name of the SR-IOV network attachment definition CR. 2 Optional: The MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify { "mac": true } in the SriovNetwork object. 3 Optional: IP addresses for the SR-IOV device that are allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Create the Pod CR: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod -n sysctl-tuning-test Example output NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh -n sysctl-tuning-test tunepod Verify the values of the configured sysctl flag. Find the value net.ipv4.conf.IFNAME.accept_redirects by running the following command:: USD sysctl net.ipv4.conf.net1.accept_redirects Example output net.ipv4.conf.net1.accept_redirects = 1 18.5.3. Configuring sysctl settings for pods associated with bonded SR-IOV interface flag You can set interface-level network sysctl settings for a pod connected to a bonded SR-IOV network device. In this example, the specific network interface-level sysctl settings that can be configured are set on the bonded interface. The sysctl-tuning-test is a namespace used in this example. Use the following command to create the sysctl-tuning-test namespace: 18.5.3.1. Setting all sysctl flag on nodes with bonded SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io custom resource definition (CRD) to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Follow this procedure to create a SriovNetworkNodePolicy custom resource (CR). Procedure Create an SriovNetworkNodePolicy custom resource (CR). Save the following YAML as the file policyallflags-sriov-node-network.yaml . Replace policyallflags with the name for the configuration. 
apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyallflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 nodeSelector: 4 node.alpha.kubernetes-incubator.io/nfd-network-sriov.capable = `true` priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: ["ens1f0"] 8 deviceType: "netdevice" 9 isRdma: false 10 1 The name for the custom resource object. 2 The namespace where the SR-IOV Network Operator is installed. 3 The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name. 4 The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only. 5 Optional: The priority is an integer value between 0 and 99 . A smaller value receives higher priority. For example, a priority of 10 is a higher priority than 99 . The default value is 99 . 6 The number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 7 The NIC selector identifies the device for the Operator to configure. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they refer to the same device. If you specify a value for netFilter , then you do not need to specify any other parameter because a network ID is unique. 8 Optional: An array of one or more physical function (PF) names for the device. 9 Optional: The driver type for the virtual functions. The only allowed value is netdevice . For a Mellanox NIC to work in DPDK mode on bare metal nodes, set isRdma to true . 10 Optional: Configures whether to enable remote direct memory access (RDMA) mode. The default value is false . If the isRdma parameter is set to true , you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. Set isRdma to true and additionally set needVhostNet to true to configure a Mellanox NIC for use with Fast Datapath DPDK applications. Note The vfio-pci driver type is not supported. Create the SriovNetworkNodePolicy object: USD oc create -f policyallflags-sriov-node-network.yaml After applying the configuration update, all the pods in sriov-network-operator namespace change to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' Example output Succeeded 18.5.3.2. Configuring sysctl on a bonded SR-IOV network You can set interface specific sysctl settings on a bonded interface created from two SR-IOV interfaces. Do this by adding the tuning configuration to the optional Plugins parameter of the bond network attachment definition. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. 
Doing so might disrupt network traffic on your additional network. To change specific interface-level network sysctl settings create the SriovNetwork custom resource (CR) with the Container Network Interface (CNI) tuning plugin by using the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Procedure Create the SriovNetwork custom resource (CR) for the bonded interface as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: allvalidflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 networkNamespace: sysctl-tuning-test 4 capabilities: '{ "mac": true, "ips": true }' 5 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 The namespace where the SR-IOV Network Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 Optional: The capabilities to configure for this additional network. You can specify "{ "ips": true }" to enable IP address support or "{ "mac": true }" to enable MAC address support. Create the SriovNetwork resource: USD oc create -f sriov-network-attachment.yaml Create a bond network attachment definition as in the following example CR. Save the YAML as the file sriov-bond-network-interface.yaml . apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bond-sysctl-network namespace: sysctl-tuning-test spec: config: '{ "cniVersion":"0.4.0", "name":"bound-net", "plugins":[ { "type":"bond", 1 "mode": "active-backup", 2 "failOverMac": 1, 3 "linksInContainer": true, 4 "miimon": "100", "links": [ 5 {"name": "net1"}, {"name": "net2"} ], "ipam":{ 6 "type":"static" } }, { "type":"tuning", 7 "capabilities":{ "mac":true }, "sysctl":{ "net.ipv4.conf.IFNAME.accept_redirects": "0", "net.ipv4.conf.IFNAME.accept_source_route": "0", "net.ipv4.conf.IFNAME.disable_policy": "1", "net.ipv4.conf.IFNAME.secure_redirects": "0", "net.ipv4.conf.IFNAME.send_redirects": "0", "net.ipv6.conf.IFNAME.accept_redirects": "0", "net.ipv6.conf.IFNAME.accept_source_route": "1", "net.ipv6.neigh.IFNAME.base_reachable_time_ms": "20000", "net.ipv6.neigh.IFNAME.retrans_time_ms": "2000" } } ] }' 1 The type is bond . 2 The mode attribute specifies the bonding mode. The bonding modes supported are: balance-rr - 0 active-backup - 1 balance-xor - 2 For balance-rr or balance-xor modes, you must set the trust mode to on for the SR-IOV virtual function. 3 The failover attribute is mandatory for active-backup mode. 4 The linksInContainer=true flag informs the Bond CNI that the required interfaces are to be found inside the container. By default, Bond CNI looks for these interfaces on the host which does not work for integration with SRIOV and Multus. 5 The links section defines which interfaces will be used to create the bond. By default, Multus names the attached interfaces as: "net", plus a consecutive number, starting with one. 6 A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 
In this pod example IP addresses are configured manually, so in this case, ipam is set to static. 7 Add additional capabilities to the device. For example, set the type field to tuning . Specify the interface-level network sysctl you want to set in the sysctl field. This example sets all interface-level network sysctl settings that can be set. Create the bond network attachment resource: USD oc create -f sriov-bond-network-interface.yaml Verifying that the NetworkAttachmentDefinition CR is successfully created Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command: USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the networkNamespace that you specified when configuring the network attachment, for example, sysctl-tuning-test . Example output NAME AGE bond-sysctl-network 22m allvalidflags 47m Note There might be a delay before the SR-IOV Network Operator creates the CR. Verifying that the additional SR-IOV network resource is successful To verify that the tuning CNI is correctly configured and the additional SR-IOV network attachment is attached, do the following: Create a Pod CR. For example, save the following YAML as the file examplepod.yaml : apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ {"name": "allvalidflags"}, 1 {"name": "allvalidflags"}, { "name": "bond-sysctl-network", "interface": "bond0", "mac": "0a:56:0a:83:04:0c", 2 "ips": ["10.100.100.200/24"] 3 } ] spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault 1 The name of the SR-IOV network attachment definition CR. 2 Optional: The MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify { "mac": true } in the SriovNetwork object. 3 Optional: IP addresses for the SR-IOV device that are allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Apply the YAML: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod -n sysctl-tuning-test Example output NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh -n sysctl-tuning-test tunepod Verify the values of the configured sysctl flag. Find the value net.ipv6.neigh.IFNAME.base_reachable_time_ms by running the following command:: USD sysctl net.ipv6.neigh.bond0.base_reachable_time_ms Example output net.ipv6.neigh.bond0.base_reachable_time_ms = 20000 18.5.4. About all-multicast mode Enabling all-multicast mode, particularly in the context of rootless applications, is critical. If you do not enable this mode, you would be required to grant the NET_ADMIN capability to the pod's Security Context Constraints (SCC). If you were to allow the NET_ADMIN capability to grant the pod privileges to make changes that extend beyond its specific requirements, you could potentially expose security vulnerabilities. The tuning CNI plugin supports changing several interface attributes, including all-multicast mode. 
By enabling this mode, you can allow applications running on Virtual Functions (VFs) that are configured on a SR-IOV network device to receive multicast traffic from applications on other VFs, whether attached to the same or different physical functions. 18.5.4.1. Enabling the all-multicast mode on an SR-IOV network You can enable the all-multicast mode on an SR-IOV interface by: Adding the tuning configuration to the metaPlugins parameter of the SriovNetwork resource Setting the allmulti field to true in the tuning configuration Note Ensure that you create the virtual function (VF) with trust enabled. The SR-IOV Network Operator manages additional network definitions. When you specify an additional SR-IOV network to create, the SR-IOV Network Operator creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. Enable the all-multicast mode on a SR-IOV network by following this guidance. Prerequisites You have installed the OpenShift Container Platform CLI (oc). You are logged in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. You have installed the SR-IOV Network Operator. You have configured an appropriate SriovNetworkNodePolicy object. Procedure Create a YAML file with the following settings that defines a SriovNetworkNodePolicy object for a Mellanox ConnectX-5 device. Save the YAML file as sriovnetpolicy-mlx.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnetpolicy-mlx namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: deviceID: "1017" pfNames: - ens8f0np0#0-9 rootDevices: - 0000:d8:00.0 vendor: "15b3" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 10 priority: 99 resourceName: resourcemlx Optional: If the SR-IOV capable cluster nodes are not already labeled, add the SriovNetworkNodePolicy.Spec.NodeSelector label. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f sriovnetpolicy-mlx.yaml After applying the configuration update, all the pods in the sriov-network-operator namespace automatically move to a Running status. Create the enable-allmulti-test namespace by running the following command: USD oc create namespace enable-allmulti-test Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment and insert the metaPlugins configuration, as in the following example CR YAML, and save the file as sriov-enable-all-multicast.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: enableallmulti 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: enableallmulti 3 networkNamespace: enable-allmulti-test 4 ipam: '{ "type": "static" }' 5 capabilities: '{ "mac": true, "ips": true }' 6 trust: "on" 7 metaPlugins : | 8 { "type": "tuning", "capabilities":{ "mac":true }, "allmulti": true } } 1 Specify a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name. 2 Specify the namespace where the SR-IOV Network Operator is installed. 3 Specify a value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 
4 Specify the target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 Specify a configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 6 Optional: Set capabilities for the additional network. You can specify "{ "ips": true }" to enable IP address support or "{ "mac": true }" to enable MAC address support. 7 Specify the trust mode of the virtual function. This must be set to "on". 8 Add more capabilities to the device by using the metaPlugins parameter. In this use case, set the type field to tuning , and add the allmulti field and set it to true . Create the SriovNetwork resource by running the following command: USD oc create -f sriov-enable-all-multicast.yaml Verification of the NetworkAttachmentDefinition CR Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command: USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the value for networkNamespace that you specified in the SriovNetwork object. For this example, that is enable-allmulti-test . Example output NAME AGE enableallmulti 14m Note There might be a delay before the SR-IOV Network Operator creates the CR. Display information about the SR-IOV network resources by running the following command: USD oc get sriovnetwork -n openshift-sriov-network-operator Verification of the additional SR-IOV network attachment To verify that the tuning CNI is correctly configured and that the additional SR-IOV network attachment is attached, follow these steps: Create a Pod CR. Save the following sample YAML in a file named examplepod.yaml : apiVersion: v1 kind: Pod metadata: name: samplepod namespace: enable-allmulti-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "enableallmulti", 1 "mac": "0a:56:0a:83:04:0c", 2 "ips": ["10.100.100.200/24"] 3 } ] spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault 1 Specify the name of the SR-IOV network attachment definition CR. 2 Optional: Specify the MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify {"mac": true} in the SriovNetwork object. 3 Optional: Specify the IP addresses for the SR-IOV device that are allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. 
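Optional: Before you create the pod, you can confirm that the SR-IOV device plugin is advertising the virtual function resource that backs this network on the node. A minimal check, assuming the default openshift.io resource prefix, the policyallflags resource name used earlier in this procedure, and a placeholder node name:

$ oc describe node <node_name> | grep policyallflags

The resource appears under both the Capacity and Allocatable fields of the node when the SriovNetworkNodePolicy has been applied successfully.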
Create the Pod CR by running the following command: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod -n enable-allmulti-test Example output NAME READY STATUS RESTARTS AGE samplepod 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh -n enable-allmulti-test samplepod List all the interfaces associated with the pod by running the following command: sh-4.4# ip link Example output 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP mode DEFAULT group default link/ether 0a:58:0a:83:00:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0 1 3: net1@if24: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether ee:9b:66:a4:ec:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0 2 1 eth0@if22 is the primary interface 2 net1@if24 is the secondary interface configured with the network-attachment-definition that supports the all-multicast mode ( ALLMULTI flag) 18.6. Configuring QinQ support for SR-IOV enabled workloads QinQ, formally known as 802.1Q-in-802.1Q, is a networking technique defined by IEEE 802.1ad. IEEE 802.1ad extends the IEEE 802.1Q-1998 standard and enriches VLAN capabilities by introducing an additional 802.1Q tag to packets already tagged with 802.1Q. This method is also referred to as VLAN stacking or double VLAN. Before you perform any tasks in the following documentation, ensure that you installed the SR-IOV Network Operator . 18.6.1. About 802.1Q-in-802.1Q support In traditional VLAN setups, frames typically contain a single VLAN tag, such as VLAN-100, as well as other metadata such as Quality of Service (QoS) bits and protocol information. QinQ introduces a second VLAN tag, where the service provider designates the outer tag for their use, offering them flexibility, while the inner tag remains dedicated to the customer's VLAN. QinQ facilitates the creation of nested VLANs by using double VLAN tagging, enabling finer segmentation and isolation of traffic within a network environment. This approach is particularly valuable in service provider networks where you need to deliver VLAN-based services to multiple customers over a common infrastructure, while ensuring separation and isolation of traffic. The following diagram illustrates how OpenShift Container Platform can use SR-IOV and QinQ to achieve advanced network segmentation and isolation for containerized workloads. The diagram shows how double VLAN tagging (QinQ) works in a worker node with SR-IOV support. The SR-IOV virtual function (VF) located in the pod namespace, ext0 is configured by the SR-IOV Container Network Interface (CNI) with a VLAN ID and VLAN protocol. This corresponds to the S-tag. Inside the pod, the VLAN CNI creates a subinterface using the primary interface ext0 . This subinterface adds an internal VLAN ID using the 802.1Q protocol, which corresponds to the C-tag. This demonstrates how QinQ enables finer traffic segmentation and isolation within the network. The Ethernet frame structure is detailed on the right, highlighting the inclusion of both VLAN tags, EtherType, IP, TCP, and Payload sections. QinQ facilitates the delivery of VLAN-based services to multiple customers over a shared infrastructure while ensuring traffic separation and isolation. 
The OpenShift Container Platform SR-IOV solution already supports setting the VLAN protocol on the SriovNetwork custom resource (CR). The virtual function (VF) can use this protocol to set the VLAN tag, also known as the outer tag. Pods can then use the VLAN CNI plugin to configure the inner tag. Table 18.14. Supported network interface cards NIC 802.1ad/802.1Q 802.1Q/802.1Q Intel X710 No Supported Intel E810 Supported Supported Mellanox No Supported Additional resources Configuration for an VLAN additional network 18.6.2. Configuring QinQ support for SR-IOV enabled workloads Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. Procedure Create a file named sriovnetpolicy-810-sriov-node-network.yaml by using the following content: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnetpolicy-810 namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: pfNames: - ens5f0#0-9 nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 10 priority: 99 resourceName: resource810 Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f sriovnetpolicy-810-sriov-node-network.yaml Open a separate terminal window and monitor the synchronization status of the SR-IOV network node state for the node specified in the openshift-sriov-network-operator namespace by running the following command: USD watch -n 1 'oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath="{.status.syncStatus}"' The synchronization status indicates a change from InProgress to Succeeded . Create a SriovNetwork object, and set the outer VLAN called the S-tag, or Service Tag , as it belongs to the infrastructure. Important You must configure the VLAN on the trunk interface of the switch. In addition, you might need to further configure some switches to support QinQ tagging. Create a file named nad-sriovnetwork-1ad-810.yaml by using the following content: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriovnetwork-1ad-810 namespace: openshift-sriov-network-operator spec: ipam: '{}' vlan: 171 1 vlanProto: "802.1ad" 2 networkNamespace: default resourceName: resource810 1 Sets the S-tag VLAN tag to 171 . 2 Specifies the VLAN protocol to assign to the virtual function (VF). Supported values are 802.1ad and 802.1q . The default value is 802.1q . Create the object by running the following command: USD oc create -f nad-sriovnetwork-1ad-810.yaml Create a NetworkAttachmentDefinition object with an inner VLAN. The inner VLAN is often referred to as the C-tag, or Customer Tag , as it belongs to the Network Function: Create a file named nad-cvlan100.yaml by using the following content: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: nad-cvlan100 namespace: default spec: config: '{ "name": "vlan-100", "cniVersion": "0.3.1", "type": "vlan", "linkInContainer": true, "master": "net1", 1 "vlanId": 100, "ipam": {"type": "static"} }' 1 Specifies the VF interface inside the pod. The default name is net1 as the name is not set in the pod annotation. 
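If you prefer not to depend on the default net1 naming, you can request an explicit interface name for the SR-IOV attachment in the pod annotation and reference that name in the master field of this network attachment definition. A short sketch under that assumption, reusing the ext0 name from the conceptual overview earlier in this section:

annotations:
  k8s.v1.cni.cncf.io/networks: '[
    { "name": "sriovnetwork-1ad-810", "interface": "ext0" },
    { "name": "nad-cvlan100" }
  ]'

In that case, set "master": "ext0" in the nad-cvlan100 configuration so that the VLAN CNI builds the C-tag subinterface on the explicitly named SR-IOV interface.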
Apply the YAML file by running the following command: USD oc apply -f nad-cvlan100.yaml Verification Verify QinQ is active on the node by following this procedure: Create a file named test-qinq-pod.yaml by using the following content: apiVersion: v1 kind: Pod metadata: name: test-pod annotations: k8s.v1.cni.cncf.io/networks: sriovnetwork-1ad-810, nad-cvlan100 spec: containers: - name: test-container image: quay.io/ocp-edge-qe/cnf-gotests-client:v4.10 imagePullPolicy: Always securityContext: privileged: true Create the test pod by running the following command: USD oc create -f test-qinq-pod.yaml Enter into a debug session on the target node where the pod is present and display information about the network interface ens5f0 by running the following command: USD oc debug node/my-cluster-node -- bash -c "ip link show ens5f0" Example output 6: ens5f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether b4:96:91:a5:22:10 brd ff:ff:ff:ff:ff:ff vf 0 link/ether a2:81:ba:d0:6f:f3 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 1 link/ether 8a:bb:0a:36:f2:ed brd ff:ff:ff:ff:ff:ff, vlan 171, vlan protocol 802.1ad, spoof checking on, link-state auto, trust off vf 2 link/ether ca:0e:e1:5b:0c:d2 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 3 link/ether ee:6c:e2:f5:2c:70 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 4 link/ether 0a:d6:b7:66:5e:e8 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 5 link/ether da:d5:e7:14:4f:aa brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 6 link/ether d6:8e:85:75:12:5c brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 7 link/ether d6:eb:ce:9c:ea:78 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 8 link/ether 5e:c5:cc:05:93:3c brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust on vf 9 link/ether a6:5a:7c:1c:2a:16 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off The vlan protocol 802.1ad ID in the output indicates that the interface supports VLAN tagging with protocol 802.1ad (QinQ). The VLAN ID is 171. 18.7. Using high performance multicast You can use multicast on your Single Root I/O Virtualization (SR-IOV) hardware network. Before you perform any tasks in the following documentation, ensure that you installed the SR-IOV Network Operator . 18.7.1. High performance multicast The OpenShift SDN network plugin supports multicast between pods on the default network. This is best used for low-bandwidth coordination or service discovery, and not high-bandwidth applications. For applications such as streaming media, like Internet Protocol television (IPTV) and multipoint videoconferencing, you can utilize Single Root I/O Virtualization (SR-IOV) hardware to provide near-native performance. When using additional SR-IOV interfaces for multicast: Multicast packages must be sent or received by a pod through the additional SR-IOV interface. The physical network which connects the SR-IOV interfaces decides the multicast routing and topology, which is not controlled by OpenShift Container Platform. 18.7.2. Configuring an SR-IOV interface for multicast The follow procedure creates an example SR-IOV interface for multicast. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. 
Procedure Create a SriovNetworkNodePolicy object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 4 nicSelector: vendor: "8086" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0'] Create a SriovNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { "type": "host-local", 2 "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [ {"dst": "224.0.0.0/5"}, {"dst": "232.0.0.0/5"} ], "gateway": "10.56.217.1" } resourceName: example 1 2 If you choose to configure DHCP as IPAM, ensure that you provision the following default routes through your DHCP server: 224.0.0.0/5 and 232.0.0.0/5 . This is to override the static multicast route set by the default network provider. Create a pod with multicast application: apiVersion: v1 kind: Pod metadata: name: testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: nic1 spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: ["NET_ADMIN"] 1 command: [ "sleep", "infinity"] 1 The NET_ADMIN capability is required only if your application needs to assign the multicast IP address to the SR-IOV interface. Otherwise, it can be omitted. 18.8. Using DPDK and RDMA The containerized Data Plane Development Kit (DPDK) application is supported on OpenShift Container Platform. You can use Single Root I/O Virtualization (SR-IOV) network hardware with the Data Plane Development Kit (DPDK) and with remote direct memory access (RDMA). Before you perform any tasks in the following documentation, ensure that you installed the SR-IOV Network Operator . 18.8.1. Example use of a virtual function in a pod You can run a remote direct memory access (RDMA) or a Data Plane Development Kit (DPDK) application in a pod with SR-IOV VF attached. This example shows a pod using a virtual function (VF) in RDMA mode: Pod spec that uses RDMA mode apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] command: ["sleep", "infinity"] The following example shows a pod with a VF in DPDK mode: Pod spec that uses DPDK mode apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: "1Gi" cpu: "2" hugepages-1Gi: "4Gi" requests: memory: "1Gi" cpu: "2" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 18.8.2. Using a virtual function in DPDK mode with an Intel NIC Prerequisites Install the OpenShift CLI ( oc ). Install the SR-IOV Network Operator. Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetworkNodePolicy object, and then save the YAML in the intel-dpdk-node-policy.yaml file. 
apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "8086" deviceID: "158b" pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: vfio-pci 1 1 Specify the driver type for the virtual functions to vfio-pci . Note See the Configuring SR-IOV network devices section for a detailed explanation on each option in SriovNetworkNodePolicy . When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes. It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f intel-dpdk-node-policy.yaml Create the following SriovNetwork object, and then save the YAML in the intel-dpdk-network.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- # ... 1 vlan: <vlan> resourceName: intelnics 1 Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . An optional library, app-netutil, provides several API methods for gathering network information about a container's parent pod. Create the SriovNetwork object by running the following command: USD oc create -f intel-dpdk-network.yaml Create the following Pod spec, and then save the YAML in the intel-dpdk-pod.yaml file. apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/intelnics: "1" 5 memory: "1Gi" cpu: "4" 6 hugepages-1Gi: "4Gi" 7 requests: openshift.io/intelnics: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where the SriovNetwork object intel-dpdk-network is created. If you would like to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Specify the DPDK image which includes your application and the DPDK library used by application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount a hugepage volume to the DPDK pod under /mnt/huge . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Optional: Specify the number of DPDK devices allocated to DPDK pod. 
This resource request and limit, if not explicitly specified, will be automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting enableInjector option to false in the default SriovOperatorConfig CR. 6 Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting CPU Manager policy to static and creating a pod with Guaranteed QoS. 7 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepage requires adding kernel arguments to Nodes. For example, adding kernel arguments default_hugepagesz=1GB , hugepagesz=1G and hugepages=16 will result in 16*1Gi hugepages be allocated during system boot. Create the DPDK pod by running the following command: USD oc create -f intel-dpdk-pod.yaml 18.8.3. Using a virtual function in DPDK mode with a Mellanox NIC You can create a network node policy and create a Data Plane Development Kit (DPDK) pod using a virtual function in DPDK mode with a Mellanox NIC. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed the Single Root I/O Virtualization (SR-IOV) Network Operator. You have logged in as a user with cluster-admin privileges. Procedure Save the following SriovNetworkNodePolicy YAML configuration to an mlx-dpdk-node-policy.yaml file: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "15b3" deviceID: "1015" 1 pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: netdevice 2 isRdma: true 3 1 Specify the device hex code of the SR-IOV network device. 2 Specify the driver type for the virtual functions to netdevice . A Mellanox SR-IOV Virtual Function (VF) can work in DPDK mode without using the vfio-pci device type. The VF device appears as a kernel network interface inside a container. 3 Enable Remote Direct Memory Access (RDMA) mode. This is required for Mellanox cards to work in DPDK mode. Note See Configuring an SR-IOV network device for a detailed explanation of each option in the SriovNetworkNodePolicy object. When applying the configuration specified in an SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f mlx-dpdk-node-policy.yaml Save the following SriovNetwork YAML configuration to an mlx-dpdk-network.yaml file: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 ... 
vlan: <vlan> resourceName: mlxnics 1 Specify a configuration object for the IP Address Management (IPAM) Container Network Interface (CNI) plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. Note See Configuring an SR-IOV network device for a detailed explanation on each option in the SriovNetwork object. The app-netutil option library provides several API methods for gathering network information about the parent pod of a container. Create the SriovNetwork object by running the following command: USD oc create -f mlx-dpdk-network.yaml Save the following Pod YAML configuration to an mlx-dpdk-pod.yaml file: apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/mlxnics: "1" 5 memory: "1Gi" cpu: "4" 6 hugepages-1Gi: "4Gi" 7 requests: openshift.io/mlxnics: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where SriovNetwork object mlx-dpdk-network is created. To create the pod in a different namespace, change target_namespace in both the Pod spec and SriovNetwork object. 2 Specify the DPDK image which includes your application and the DPDK library used by the application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount the hugepage volume to the DPDK pod under /mnt/huge . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Optional: Specify the number of DPDK devices allocated for the DPDK pod. If not explicitly specified, this resource request and limit is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR. 6 Specify the number of CPUs. The DPDK pod usually requires that exclusive CPUs be allocated from the kubelet. To do this, set the CPU Manager policy to static and create a pod with Guaranteed Quality of Service (QoS). 7 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to Nodes. Create the DPDK pod by running the following command: USD oc create -f mlx-dpdk-pod.yaml 18.8.4. Using the TAP CNI to run a rootless DPDK workload with kernel access DPDK applications can use virtio-user as an exception path to inject certain types of packets, such as log messages, into the kernel for processing. For more information about this feature, see Virtio_user as Exception Path . In OpenShift Container Platform version 4.14 and later, you can use non-privileged pods to run DPDK applications alongside the tap CNI plugin. To enable this functionality, you need to mount the vhost-net device by setting the needVhostNet parameter to true within the SriovNetworkNodePolicy object. Figure 18.1. 
DPDK and TAP example configuration Prerequisites You have installed the OpenShift CLI ( oc ). You have installed the SR-IOV Network Operator. You are logged in as a user with cluster-admin privileges. Ensure that setsebools container_use_devices=on is set as root on all nodes. Note Use the Machine Config Operator to set this SELinux boolean. Procedure Create a file, such as test-namespace.yaml , with content like the following example: apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: "false" Create the new Namespace object by running the following command: USD oc apply -f test-namespace.yaml Create a file, such as sriov-node-network-policy.yaml , with content like the following example:: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 needVhostNet: true 3 nicSelector: vendor: "15b3" 4 deviceID: "101b" 5 rootDevices: ["00:05.0"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 1 This indicates that the profile is tailored specifically for Mellanox Network Interface Controllers (NICs). 2 Setting isRdma to true is only required for a Mellanox NIC. 3 This mounts the /dev/net/tun and /dev/vhost-net devices into the container so the application can create a tap device and connect the tap device to the DPDK workload. 4 The vendor hexadecimal code of the SR-IOV network device. The value 15b3 is associated with a Mellanox NIC. 5 The device hexadecimal code of the SR-IOV network device. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f sriov-node-network-policy.yaml Create the following SriovNetwork object, and then save the YAML in the sriov-network-attachment.yaml file: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: "off" trust: "on" Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . An optional library, app-netutil , provides several API methods for gathering network information about a container's parent pod. Create the SriovNetwork object by running the following command: USD oc create -f sriov-network-attachment.yaml Create a file, such as tap-example.yaml , that defines a network attachment definition, with content like the following example: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: tap-one namespace: test-namespace 1 spec: config: '{ "cniVersion": "0.4.0", "name": "tap", "plugins": [ { "type": "tap", "multiQueue": true, "selinuxcontext": "system_u:system_r:container_t:s0" }, { "type":"tuning", "capabilities":{ "mac":true } } ] }' 1 Specify the same target_namespace where the SriovNetwork object is created. 
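The tap interface created through this network attachment definition is what the DPDK application inside the pod uses as its exception path into the kernel, by opening /dev/vhost-net through a virtio-user port. As a hedged illustration only, a testpmd invocation inside the pod might attach to the tap device with a virtio-user vdev similar to the following, assuming the interface is named ext0 as in the pod example later in this procedure; the exact arguments depend on your DPDK version and application:

$ dpdk-testpmd -l <cores> --vdev=virtio_user0,path=/dev/vhost-net,queues=1,queue_size=1024,iface=ext0 -- -i

This mirrors the virtio-user exception path described at the start of this section.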
Create the NetworkAttachmentDefinition object by running the following command: USD oc apply -f tap-example.yaml Create a file, such as dpdk-pod-rootless.yaml , with content like the following example: apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: test-namespace 1 annotations: k8s.v1.cni.cncf.io/networks: '[ {"name": "sriov-network", "namespace": "test-namespace"}, {"name": "tap-one", "interface": "ext0", "namespace": "test-namespace"}]' spec: nodeSelector: kubernetes.io/hostname: "worker-0" securityContext: fsGroup: 1001 2 runAsGroup: 1001 3 seccompProfile: type: RuntimeDefault containers: - name: testpmd image: <DPDK_image> 4 securityContext: capabilities: drop: ["ALL"] 5 add: 6 - IPC_LOCK - NET_RAW #for mlx only 7 runAsUser: 1001 8 privileged: false 9 allowPrivilegeEscalation: true 10 runAsNonRoot: true 11 volumeMounts: - mountPath: /mnt/huge 12 name: hugepages resources: limits: openshift.io/sriovnic: "1" 13 memory: "1Gi" cpu: "4" 14 hugepages-1Gi: "4Gi" 15 requests: openshift.io/sriovnic: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] runtimeClassName: performance-cnf-performanceprofile 16 volumes: - name: hugepages emptyDir: medium: HugePages 1 Specify the same target_namespace in which the SriovNetwork object is created. If you want to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Sets the group ownership of volume-mounted directories and files created in those volumes. 3 Specify the primary group ID used for running the container. 4 Specify the DPDK image that contains your application and the DPDK library used by application. 5 Removing all capabilities ( ALL ) from the container's securityContext means that the container has no special privileges beyond what is necessary for normal operation. 6 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. These capabilities must also be set in the binary file by using the setcap command. 7 Mellanox network interface controller (NIC) requires the NET_RAW capability. 8 Specify the user ID used for running the container. 9 This setting indicates that the container or containers within the pod should not be granted privileged access to the host system. 10 This setting allows a container to escalate its privileges beyond the initial non-root privileges it might have been assigned. 11 This setting ensures that the container runs with a non-root user. This helps enforce the principle of least privilege, limiting the potential impact of compromising the container and reducing the attack surface. 12 Mount a hugepage volume to the DPDK pod under /mnt/huge . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 13 Optional: Specify the number of DPDK devices allocated for the DPDK pod. If not explicitly specified, this resource request and limit is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR. 14 Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting CPU Manager policy to static and creating a pod with Guaranteed QoS. 
15 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to Nodes. For example, adding kernel arguments default_hugepagesz=1GB , hugepagesz=1G and hugepages=16 will result in 16*1Gi hugepages being allocated during system boot. 16 If your performance profile is not named cnf-performanceprofile , replace that string with the correct performance profile name. Create the DPDK pod by running the following command: USD oc create -f dpdk-pod-rootless.yaml Additional resources Enabling the container_use_devices boolean Creating a performance profile Configuring an SR-IOV network device 18.8.5. Overview of achieving a specific DPDK line rate To achieve a specific Data Plane Development Kit (DPDK) line rate, deploy a Node Tuning Operator and configure Single Root I/O Virtualization (SR-IOV). You must also tune the DPDK settings for the following resources: Isolated CPUs Hugepages The topology scheduler Note In previous versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. DPDK test environment The following diagram shows the components of a traffic-testing environment: Traffic generator : An application that can generate high-volume packet traffic. SR-IOV-supporting NIC : A network interface card compatible with SR-IOV. The card runs a number of virtual functions on a physical interface. Physical Function (PF) : A PCI Express (PCIe) function of a network adapter that supports the SR-IOV interface. Virtual Function (VF) : A lightweight PCIe function on a network adapter that supports SR-IOV. The VF is associated with the PCIe PF on the network adapter. The VF represents a virtualized instance of the network adapter. Switch : A network switch. Nodes can also be connected back-to-back. testpmd : An example application included with DPDK. The testpmd application can be used to test the DPDK in a packet-forwarding mode. The testpmd application is also an example of how to build a fully-fledged application using the DPDK Software Development Kit (SDK). worker 0 and worker 1 : OpenShift Container Platform nodes. 18.8.6. Using SR-IOV and the Node Tuning Operator to achieve a DPDK line rate You can use the Node Tuning Operator to configure isolated CPUs, hugepages, and a topology scheduler. You can then use the Node Tuning Operator with Single Root I/O Virtualization (SR-IOV) to achieve a specific Data Plane Development Kit (DPDK) line rate. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed the SR-IOV Network Operator. You have logged in as a user with cluster-admin privileges. You have deployed a standalone Node Tuning Operator. Note In previous versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator.
Procedure Create a PerformanceProfile object based on the following example: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: globallyDisableIrqLoadBalancing: true cpu: isolated: 21-51,73-103 1 reserved: 0-20,52-72 2 hugepages: defaultHugepagesSize: 1G 3 pages: - count: 32 size: 1G net: userLevelNetworking: true numa: topologyPolicy: "single-numa-node" nodeSelector: node-role.kubernetes.io/worker-cnf: "" 1 If hyperthreading is enabled on the system, allocate the relevant symbolic links to the isolated and reserved CPU groups. If the system contains multiple non-uniform memory access nodes (NUMAs), allocate CPUs from both NUMAs to both groups. You can also use the Performance Profile Creator for this task. For more information, see Creating a performance profile . 2 You can also specify a list of devices that will have their queues set to the reserved CPU count. For more information, see Reducing NIC queues using the Node Tuning Operator . 3 Allocate the number and size of hugepages needed. You can specify the NUMA configuration for the hugepages. By default, the system allocates an even number to every NUMA node on the system. If needed, you can request the use of a realtime kernel for the nodes. See Provisioning a worker with real-time capabilities for more information. Save the yaml file as mlx-dpdk-perfprofile-policy.yaml . Apply the performance profile using the following command: USD oc create -f mlx-dpdk-perfprofile-policy.yaml 18.8.6.1. DPDK library for use with container applications An optional library , app-netutil , provides several API methods for gathering network information about a pod from within a container running within that pod. This library can assist with integrating SR-IOV virtual functions (VFs) in Data Plane Development Kit (DPDK) mode into the container. The library provides both a Golang API and a C API. Currently there are three API methods implemented: GetCPUInfo() This function determines which CPUs are available to the container and returns the list. GetHugepages() This function determines the amount of huge page memory requested in the Pod spec for each container and returns the values. GetInterfaces() This function determines the set of interfaces in the container and returns the list. The return value includes the interface type and type-specific data for each interface. The repository for the library includes a sample Dockerfile to build a container image, dpdk-app-centos . The container image can run one of the following DPDK sample applications, depending on an environment variable in the pod specification: l2fwd , l3wd or testpmd . The container image provides an example of integrating the app-netutil library into the container image itself. The library can also integrate into an init container. The init container can collect the required data and pass the data to an existing DPDK workload. 18.8.6.2. Example SR-IOV Network Operator for virtual functions You can use the Single Root I/O Virtualization (SR-IOV) Network Operator to allocate and configure Virtual Functions (VFs) from SR-IOV-supporting Physical Function NICs on the nodes. For more information on deploying the Operator, see Installing the SR-IOV Network Operator . For more information on configuring an SR-IOV network device, see Configuring an SR-IOV network device . There are some differences between running Data Plane Development Kit (DPDK) workloads on Intel VFs and Mellanox VFs. 
This section provides object configuration examples for both VF types. The following is an example of an sriovNetworkNodePolicy object used to run DPDK applications on Intel NICs: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 1 needVhostNet: true 2 nicSelector: pfNames: ["ens3f0"] nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 10 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci needVhostNet: true nicSelector: pfNames: ["ens3f1"] nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 10 priority: 99 resourceName: dpdk_nic_2 1 For Intel NICs, deviceType must be vfio-pci . 2 If kernel communication with DPDK workloads is required, add needVhostNet: true . This mounts the /dev/net/tun and /dev/vhost-net devices into the container so the application can create a tap device and connect the tap device to the DPDK workload. The following is an example of an sriovNetworkNodePolicy object for Mellanox NICs: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 nicSelector: rootDevices: - "0000:5e:00.1" nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 5 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-2 namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: rootDevices: - "0000:5e:00.0" nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 5 priority: 99 resourceName: dpdk_nic_2 1 For Mellanox devices the deviceType must be netdevice . 2 For Mellanox devices isRdma must be true . Mellanox cards are connected to DPDK applications using Flow Bifurcation. This mechanism splits traffic between Linux user space and kernel space, and can enhance line rate processing capability. 18.8.6.3. Example SR-IOV network operator The following is an example definition of an sriovNetwork object. In this case, Intel and Mellanox configurations are identical: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-1 namespace: openshift-sriov-network-operator spec: ipam: '{"type": "host-local","ranges": [[{"subnet": "10.0.1.0/24"}]],"dataDir": "/run/my-orchestrator/container-ipam-state-1"}' 1 networkNamespace: dpdk-test 2 spoofChk: "off" trust: "on" resourceName: dpdk_nic_1 3 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-2 namespace: openshift-sriov-network-operator spec: ipam: '{"type": "host-local","ranges": [[{"subnet": "10.0.2.0/24"}]],"dataDir": "/run/my-orchestrator/container-ipam-state-1"}' networkNamespace: dpdk-test spoofChk: "off" trust: "on" resourceName: dpdk_nic_2 1 You can use a different IP Address Management (IPAM) implementation, such as Whereabouts. For more information, see Dynamic IP address assignment configuration with Whereabouts . 2 You must request the networkNamespace where the network attachment definition will be created. You must create the sriovNetwork CR under the openshift-sriov-network-operator namespace. 3 The resourceName value must match that of the resourceName created under the sriovNetworkNodePolicy . 18.8.6.4. 
Example DPDK base workload The following is an example of a Data Plane Development Kit (DPDK) container: apiVersion: v1 kind: Namespace metadata: name: dpdk-test --- apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ 1 { "name": "dpdk-network-1", "namespace": "dpdk-test" }, { "name": "dpdk-network-2", "namespace": "dpdk-test" } ]' irq-load-balancing.crio.io: "disable" 2 cpu-load-balancing.crio.io: "disable" cpu-quota.crio.io: "disable" labels: app: dpdk name: testpmd namespace: dpdk-test spec: runtimeClassName: performance-performance 3 containers: - command: - /bin/bash - -c - sleep INF image: registry.redhat.io/openshift4/dpdk-base-rhel8 imagePullPolicy: Always name: dpdk resources: 4 limits: cpu: "16" hugepages-1Gi: 8Gi memory: 2Gi requests: cpu: "16" hugepages-1Gi: 8Gi memory: 2Gi securityContext: capabilities: add: - IPC_LOCK - SYS_RESOURCE - NET_RAW - NET_ADMIN runAsUser: 0 volumeMounts: - mountPath: /mnt/huge name: hugepages terminationGracePeriodSeconds: 5 volumes: - emptyDir: medium: HugePages name: hugepages 1 Request the SR-IOV networks you need. Resources for the devices will be injected automatically. 2 Disable the CPU and IRQ load balancing base. See Disabling interrupt processing for individual pods for more information. 3 Set the runtimeClass to performance-performance . Do not set the runtimeClass to HostNetwork or privileged . 4 Request an equal number of resources for requests and limits to start the pod with Guaranteed Quality of Service (QoS). Note Do not start the pod with SLEEP and then exec into the pod to start the testpmd or the DPDK workload. This can add additional interrupts as the exec process is not pinned to any CPU. 18.8.6.5. Example testpmd script The following is an example script for running testpmd : #!/bin/bash set -ex export CPU=USD(cat /sys/fs/cgroup/cpuset/cpuset.cpus) echo USD{CPU} dpdk-testpmd -l USD{CPU} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_1} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_2} -n 4 -- -i --nb-cores=15 --rxd=4096 --txd=4096 --rxq=7 --txq=7 --forward-mode=mac --eth-peer=0,50:00:00:00:00:01 --eth-peer=1,50:00:00:00:00:02 This example uses two different sriovNetwork CRs. The environment variable contains the Virtual Function (VF) PCI address that was allocated for the pod. If you use the same network in the pod definition, you must split the pciAddress . It is important to configure the correct MAC addresses of the traffic generator. This example uses custom MAC addresses. 18.8.7. Using a virtual function in RDMA mode with a Mellanox NIC Important RDMA over Converged Ethernet (RoCE) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . RDMA over Converged Ethernet (RoCE) is the only supported mode when using RDMA on OpenShift Container Platform. Prerequisites Install the OpenShift CLI ( oc ). Install the SR-IOV Network Operator. Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetworkNodePolicy object, and then save the YAML in the mlx-rdma-node-policy.yaml file. 
apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "15b3" deviceID: "1015" 1 pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: netdevice 2 isRdma: true 3 1 Specify the device hex code of the SR-IOV network device. 2 Specify the driver type for the virtual functions to netdevice . 3 Enable RDMA mode. Note See the Configuring SR-IOV network devices section for a detailed explanation on each option in SriovNetworkNodePolicy . When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes. It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f mlx-rdma-node-policy.yaml Create the following SriovNetwork object, and then save the YAML in the mlx-rdma-network.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 # ... vlan: <vlan> resourceName: mlxnics 1 Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . An optional library, app-netutil, provides several API methods for gathering network information about a container's parent pod. Create the SriovNetwork object by running the following command: USD oc create -f mlx-rdma-network.yaml Create the following Pod spec, and then save the YAML in the mlx-rdma-pod.yaml file. apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: memory: "1Gi" cpu: "4" 5 hugepages-1Gi: "4Gi" 6 requests: memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where the SriovNetwork object mlx-rdma-network is created. If you would like to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Specify the RDMA image which includes your application and the RDMA library used by the application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount the hugepage volume to the RDMA pod under /mnt/huge . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Specify the number of CPUs. The RDMA pod usually requires exclusive CPUs to be allocated from the kubelet.
This is achieved by setting CPU Manager policy to static and creating a pod with Guaranteed QoS. 6 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the RDMA pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to Nodes. Create the RDMA pod by running the following command: USD oc create -f mlx-rdma-pod.yaml 18.8.8. A test pod template for clusters that use OVS-DPDK on OpenStack The following testpmd pod demonstrates container creation with huge pages, reserved CPUs, and the SR-IOV port. An example testpmd pod apiVersion: v1 kind: Pod metadata: name: testpmd-dpdk namespace: mynamespace annotations: cpu-load-balancing.crio.io: "disable" cpu-quota.crio.io: "disable" # ... spec: containers: - name: testpmd command: ["sleep", "99999"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: ["IPC_LOCK","SYS_ADMIN"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/dpdk1: 1 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/dpdk1: 1 volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 2 volumes: - name: hugepage emptyDir: medium: HugePages 1 The name dpdk1 in this example is a user-created SriovNetworkNodePolicy resource. You can substitute this name for that of a resource that you create. 2 If your performance profile is not named cnf-performanceprofile , replace that string with the correct performance profile name. 18.8.9. Additional resources Supported devices Creating a performance profile Adjusting the NIC queues with the performance profile Provisioning real-time and low latency workloads Installing the SR-IOV Network Operator Configuring an SR-IOV network device Dynamic IP address assignment configuration with Whereabouts Disabling interrupt processing for individual pods Configuring an SR-IOV Ethernet network attachment 18.9. Using pod-level bonding Bonding at the pod level is vital to enable workloads inside pods that require high availability and more throughput. With pod-level bonding, you can create a bond interface from multiple single root I/O virtualization (SR-IOV) virtual function interfaces in a kernel mode interface. The SR-IOV virtual functions are passed into the pod and attached to a kernel driver. One scenario where pod-level bonding is required is creating a bond interface from multiple SR-IOV virtual functions on different physical functions. Creating a bond interface from two different physical functions on the host can be used to achieve high availability and throughput at the pod level. Before you perform any tasks in the following documentation, ensure that you installed the SR-IOV Network Operator . For guidance on tasks such as creating an SR-IOV network, network policies, network attachment definitions and pods, see Configuring an SR-IOV network device . 18.9.1. Configuring a bond interface from two SR-IOV interfaces Bonding enables multiple network interfaces to be aggregated into a single logical "bonded" interface. A Bond-CNI interface can be created by using Single Root I/O Virtualization (SR-IOV) virtual functions and placing them in the container network namespace. OpenShift Container Platform only supports Bond-CNI using SR-IOV virtual functions.
The SR-IOV Network Operator provides the SR-IOV CNI plugin needed to manage the virtual functions. Other CNIs or types of interfaces are not supported. Prerequisites The SR-IOV Network Operator must be installed and configured to obtain virtual functions in a container. To configure SR-IOV interfaces, an SR-IOV network and policy must be created for each interface. The SR-IOV Network Operator creates a network attachment definition for each SR-IOV interface, based on the SR-IOV network and policy defined. The linkState attribute is set to the default value auto for the SR-IOV virtual function. 18.9.1.1. Creating a bond network attachment definition Now that the SR-IOV virtual functions are available, you can create a bond network attachment definition. apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bond-net1 namespace: demo spec: config: '{ "type": "bond", 1 "cniVersion": "0.3.1", "name": "bond-net1", "mode": "active-backup", 2 "failOverMac": 1, 3 "linksInContainer": true, 4 "miimon": "100", "mtu": 1500, "links": [ 5 {"name": "net1"}, {"name": "net2"} ], "ipam": { "type": "host-local", "subnet": "10.56.217.0/24", "routes": [{ "dst": "0.0.0.0/0" }], "gateway": "10.56.217.1" } }' 1 The CNI type is always set to bond . 2 The mode attribute specifies the bonding mode. Note The bonding modes supported are: balance-rr - 0 active-backup - 1 balance-xor - 2 For balance-rr or balance-xor modes, you must set the trust mode to on for the SR-IOV virtual function. 3 The failOverMac attribute is mandatory for active-backup mode and must be set to 1 . 4 The linksInContainer=true flag informs the Bond CNI that the required interfaces are to be found inside the container. By default, the Bond CNI looks for these interfaces on the host, which does not work for integration with SR-IOV and Multus. 5 The links section defines which interfaces are used to create the bond. By default, Multus names the attached interfaces as "net" plus a consecutive number, starting with one. 18.9.1.2. Creating a pod using a bond interface Test the setup by creating a pod from a YAML file named, for example, podbonding.yaml , with content similar to the following: apiVersion: v1 kind: Pod metadata: name: bondpod1 namespace: demo annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1 1 spec: containers: - name: podexample image: quay.io/openshift/origin-network-interface-bond-cni:4.11.0 command: ["/bin/bash", "-c", "sleep INF"] 1 Note the network annotation: it contains two SR-IOV network attachments and one bond network attachment. The bond attachment uses the two SR-IOV interfaces as bonded port interfaces.
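The bond-net1 network attachment definition shown above, together with the two SR-IOV networks it references ( demo/sriovnet1 and demo/sriovnet2 ), must already exist in the demo namespace before the pod can attach to them. A minimal sketch of creating and checking the bond definition, assuming it was saved to a file named bond-net1.yaml (the file name is illustrative): $ oc apply -f bond-net1.yaml $ oc get net-attach-def -n demo The first command creates the bond network attachment definition; the second confirms that it is listed alongside the SR-IOV network attachment definitions that the SR-IOV Network Operator created.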
Apply the YAML by running the following command: $ oc apply -f podbonding.yaml Inspect the pod interfaces with the following command: $ oc rsh -n demo bondpod1 sh-4.4# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 3: eth0@if150: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP link/ether 62:b1:b5:c8:fb:7a brd ff:ff:ff:ff:ff:ff inet 10.244.1.122/24 brd 10.244.1.255 scope global eth0 valid_lft forever preferred_lft forever 4: net3: <BROADCAST,MULTICAST,UP,LOWER_UP400> mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 1 inet 10.56.217.66/24 scope global bond0 valid_lft forever preferred_lft forever 43: net1: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 2 44: net2: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 3 1 The bond interface is automatically named net3 . To set a specific interface name, add the @name suffix to the pod's k8s.v1.cni.cncf.io/networks annotation. 2 The net1 interface is based on an SR-IOV virtual function. 3 The net2 interface is based on an SR-IOV virtual function. Note If no interface names are configured in the pod annotation, interface names are assigned automatically as net<n> , with <n> starting at 1 . Optional: If you want to set a specific interface name, for example bond0 , edit the k8s.v1.cni.cncf.io/networks annotation and set bond0 as the interface name as follows: annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1@bond0 18.10. Configuring hardware offloading As a cluster administrator, you can configure hardware offloading on compatible nodes to increase data processing performance and reduce load on host CPUs. Before you perform any tasks in the following documentation, ensure that you installed the SR-IOV Network Operator . 18.10.1. About hardware offloading Open vSwitch hardware offloading is a method of processing network tasks by diverting them away from the CPU and offloading them to a dedicated processor on a network interface controller. As a result, clusters can benefit from faster data transfer speeds, reduced CPU workloads, and lower computing costs. The key element for this feature is a modern class of network interface controllers known as SmartNICs. A SmartNIC is a network interface controller that is able to handle computationally-heavy network processing tasks. In the same way that a dedicated graphics card can improve graphics performance, a SmartNIC can improve network performance. In each case, a dedicated processor improves performance for a specific type of processing task. In OpenShift Container Platform, you can configure hardware offloading for bare metal nodes that have a compatible SmartNIC. Hardware offloading is configured and enabled by the SR-IOV Network Operator. Hardware offloading is not compatible with all workloads or application types. Only the following two communication types are supported: pod-to-pod pod-to-service, where the service is a ClusterIP service backed by a regular pod In all cases, hardware offloading takes place only when those pods and services are assigned to nodes that have a compatible SmartNIC.
Suppose, for example, that a pod on a node with hardware offloading tries to communicate with a service on a regular node. On the regular node, all the processing takes place in the kernel, so the overall performance of the pod-to-service communication is limited to the maximum performance of that regular node. Hardware offloading is not compatible with DPDK applications. Enabling hardware offloading on a node, but not configuring pods to use it, can result in decreased throughput performance for pod traffic. You cannot configure hardware offloading for pods that are managed by OpenShift Container Platform. 18.10.2. Supported devices Hardware offloading is supported on the following network interface controllers: Table 18.15. Supported network interface controllers Manufacturer Model Vendor ID Device ID Mellanox MT27800 Family [ConnectX-5] 15b3 1017 Mellanox MT28880 Family [ConnectX-5 Ex] 15b3 1019 Mellanox MT2892 Family [ConnectX-6 Dx] 15b3 101d Mellanox MT2894 Family [ConnectX-6 Lx] 15b3 101f Mellanox MT42822 BlueField-2 in ConnectX-6 NIC mode 15b3 a2d6 18.10.3. Prerequisites Your cluster has at least one bare metal machine with a network interface controller that is supported for hardware offloading. You installed the SR-IOV Network Operator . Your cluster uses the OVN-Kubernetes network plugin . In your OVN-Kubernetes network plugin configuration , the gatewayConfig.routingViaHost field is set to false . 18.10.4. Setting the SR-IOV Network Operator into systemd mode To support hardware offloading, you must first set the SR-IOV Network Operator into systemd mode. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user that has the cluster-admin role. Procedure Create a SriovOperatorConfig custom resource (CR) to deploy all the SR-IOV Operator components: Create a file named sriovOperatorConfig.yaml that contains the following YAML: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default 1 namespace: openshift-sriov-network-operator spec: enableInjector: true enableOperatorWebhook: true configurationMode: "systemd" 2 logLevel: 2 1 The only valid name for the SriovOperatorConfig resource is default , and it must be in the namespace where the Operator is deployed. 2 Setting the SR-IOV Network Operator into systemd mode is only relevant for Open vSwitch hardware offloading. Create the resource by running the following command: $ oc apply -f sriovOperatorConfig.yaml 18.10.5. Configuring a machine config pool for hardware offloading To enable hardware offloading, you now create a dedicated machine config pool and configure it to work with the SR-IOV Network Operator. Prerequisites The SR-IOV Network Operator is installed and set into systemd mode. Procedure Create a machine config pool for the machines that you want to use hardware offloading on. Create a file, such as mcp-offloading.yaml , with content like the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-offloading 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-offloading]} 2 nodeSelector: matchLabels: node-role.kubernetes.io/mcp-offloading: "" 3 1 2 The name of your machine config pool for hardware offloading. 3 This node role label is used to add nodes to the machine config pool. Apply the configuration for the machine config pool: $ oc create -f mcp-offloading.yaml Add nodes to the machine config pool.
Label each node with the node role label of your pool: $ oc label node worker-2 node-role.kubernetes.io/mcp-offloading="" Optional: To verify that the new pool is created, run the following command: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 2d v1.29.4 master-1 Ready master 2d v1.29.4 master-2 Ready master 2d v1.29.4 worker-0 Ready worker 2d v1.29.4 worker-1 Ready worker 2d v1.29.4 worker-2 Ready mcp-offloading,worker 47h v1.29.4 worker-3 Ready mcp-offloading,worker 47h v1.29.4 Add this machine config pool to the SriovNetworkPoolConfig custom resource: Create a file, such as sriov-pool-config.yaml , with content like the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkPoolConfig metadata: name: sriovnetworkpoolconfig-offload namespace: openshift-sriov-network-operator spec: ovsHardwareOffloadConfig: name: mcp-offloading 1 1 The name of your machine config pool for hardware offloading. Apply the configuration: $ oc create -f <SriovNetworkPoolConfig_name>.yaml Note When you apply the configuration specified in a SriovNetworkPoolConfig object, the SR-IOV Operator drains and restarts the nodes in the machine config pool. It might take several minutes for a configuration change to apply. 18.10.6. Configuring the SR-IOV network node policy You can create an SR-IOV network device configuration for a node by creating an SR-IOV network node policy. To enable hardware offloading, you must define the .spec.eSwitchMode field with the value "switchdev" . The following procedure creates an SR-IOV interface for a network interface controller with hardware offloading. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Procedure Create a file, such as sriov-node-policy.yaml , with content like the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy 1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 2 eSwitchMode: "switchdev" 3 nicSelector: deviceID: "1019" rootDevices: - 0000:d8:00.0 vendor: "15b3" pfNames: - ens8f0 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 6 priority: 5 resourceName: mlxnics 1 The name for the custom resource object. 2 Required. Hardware offloading is not supported with vfio-pci . 3 Required. Apply the configuration for the policy: $ oc create -f sriov-node-policy.yaml Note When you apply the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator drains and restarts the nodes in the machine config pool. It might take several minutes for a configuration change to apply. 18.10.6.1. An example SR-IOV network node policy for OpenStack The following example describes an SR-IOV interface for a network interface controller (NIC) with hardware offloading on Red Hat OpenStack Platform (RHOSP). An SR-IOV interface for a NIC with hardware offloading on RHOSP apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: ${name} namespace: openshift-sriov-network-operator spec: deviceType: switchdev isRdma: true nicSelector: netFilter: openstack/NetworkID:${net_id} nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: ${name} 18.10.7.
Improving network traffic performance using a virtual function Follow this procedure to assign a virtual function to the OVN-Kubernetes management port and increase its network traffic performance. This procedure results in the creation of two pools: the first has a virtual function used by OVN-Kubernetes, and the second comprises the remaining virtual functions. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Procedure Add the network.operator.openshift.io/smart-nic label to each worker node with a SmartNIC present by running the following command: USD oc label node <node-name> network.operator.openshift.io/smart-nic= Use the oc get nodes command to get a list of the available nodes. Create a policy named sriov-node-mgmt-vf-policy.yaml for the management port with content such as the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-mgmt-vf-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: "switchdev" nicSelector: deviceID: "1019" rootDevices: - 0000:d8:00.0 vendor: "15b3" pfNames: - ens8f0#0-0 1 nodeSelector: network.operator.openshift.io/smart-nic: "" numVfs: 6 2 priority: 5 resourceName: mgmtvf 1 Replace this device with the appropriate network device for your use case. The #0-0 part of the pfNames value reserves a single virtual function used by OVN-Kubernetes. 2 The value provided here is an example. Replace this value with one that meets your requirements. For more information, see SR-IOV network node configuration object in the Additional resources section. Create a policy named sriov-node-policy.yaml with content such as the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: "switchdev" nicSelector: deviceID: "1019" rootDevices: - 0000:d8:00.0 vendor: "15b3" pfNames: - ens8f0#1-5 1 nodeSelector: network.operator.openshift.io/smart-nic: "" numVfs: 6 2 priority: 5 resourceName: mlxnics 1 Replace this device with the appropriate network device for your use case. 2 The value provided here is an example. Replace this value with the value specified in the sriov-node-mgmt-vf-policy.yaml file. For more information, see SR-IOV network node configuration object in the Additional resources section. Note The sriov-node-mgmt-vf-policy.yaml file has different values for the pfNames and resourceName keys than the sriov-node-policy.yaml file. Apply the configuration for both policies: USD oc create -f sriov-node-policy.yaml USD oc create -f sriov-node-mgmt-vf-policy.yaml Create a Cluster Network Operator (CNO) ConfigMap in the cluster for the management configuration: Create a ConfigMap named hardware-offload-config.yaml with the following contents: apiVersion: v1 kind: ConfigMap metadata: name: hardware-offload-config namespace: openshift-network-operator data: mgmt-port-resource-name: openshift.io/mgmtvf Apply the configuration for the ConfigMap: USD oc create -f hardware-offload-config.yaml Additional resources SR-IOV network node configuration object 18.10.8. Creating a network attachment definition After you define the machine config pool and the SR-IOV network node policy, you can create a network attachment definition for the network interface card you specified. Prerequisites You installed the OpenShift CLI ( oc ). 
You have access to the cluster as a user with the cluster-admin role. Procedure Create a file, such as net-attach-def.yaml , with content like the following example: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: net-attach-def 1 namespace: net-attach-def 2 annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/mlxnics 3 spec: config: '{"cniVersion":"0.3.1","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{}}' 1 The name for your network attachment definition. 2 The namespace for your network attachment definition. 3 This is the value of the spec.resourceName field you specified in the SriovNetworkNodePolicy object. Apply the configuration for the network attachment definition: USD oc create -f net-attach-def.yaml Verification Run the following command to see whether the new definition is present: USD oc get net-attach-def -A Example output NAMESPACE NAME AGE net-attach-def net-attach-def 43h 18.10.9. Adding the network attachment definition to your pods After you create the machine config pool, the SriovNetworkPoolConfig and SriovNetworkNodePolicy custom resources, and the network attachment definition, you can apply these configurations to your pods by adding the network attachment definition to your pod specifications. Procedure In the pod specification, add the .metadata.annotations.k8s.v1.cni.cncf.io/networks field and specify the network attachment definition you created for hardware offloading: .... metadata: annotations: v1.multus-cni.io/default-network: net-attach-def/net-attach-def 1 1 The value must be the name and namespace of the network attachment definition you created for hardware offloading. 18.11. Switching Bluefield-2 from DPU to NIC You can switch the Bluefield-2 network device from data processing unit (DPU) mode to network interface controller (NIC) mode. Before you perform any tasks in the following documentation, ensure that you installed the SR-IOV Network Operator . 18.11.1. Switching Bluefield-2 from DPU mode to NIC mode Use the following procedure to switch Bluefield-2 from data processing units (DPU) mode to network interface controller (NIC) mode. Important Currently, only switching Bluefield-2 from DPU to NIC mode is supported. Switching from NIC mode to DPU mode is unsupported. Prerequisites You have installed the SR-IOV Network Operator. For more information, see "Installing SR-IOV Network Operator". You have updated Bluefield-2 to the latest firmware. For more information, see Firmware for NVIDIA BlueField-2 . 
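Before you begin the procedure, it can help to confirm which worker nodes actually carry a Bluefield-2 card so that you label only those nodes in the first step. A minimal sketch, assuming the lspci utility is available in the node debug shell (the command and grep pattern are illustrative and not part of the official procedure): $ oc debug node/<node_name> -- bash -c "lspci -d 15b3: | grep -i bluefield" Apply the node-role.kubernetes.io/sriov= label only to nodes that report a BlueField-2 device.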
Procedure Add the following labels to each of your worker nodes by entering the following commands: USD oc label node <example_node_name_one> node-role.kubernetes.io/sriov= USD oc label node <example_node_name_two> node-role.kubernetes.io/sriov= Create a machine config pool for the SR-IOV Network Operator, for example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: sriov spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,sriov]} nodeSelector: matchLabels: node-role.kubernetes.io/sriov: "" Apply the following machineconfig.yaml file to the worker nodes: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: sriov name: 99-bf2-dpu spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ZmluZF9jb250YWluZXIoKSB7CiAgY3JpY3RsIHBzIC1vIGpzb24gfCBqcSAtciAnLmNvbnRhaW5lcnNbXSB8IHNlbGVjdCgubWV0YWRhdGEubmFtZT09InNyaW92LW5ldHdvcmstY29uZmlnLWRhZW1vbiIpIHwgLmlkJwp9CnVudGlsIG91dHB1dD0kKGZpbmRfY29udGFpbmVyKTsgW1sgLW4gIiRvdXRwdXQiIF1dOyBkbwogIGVjaG8gIndhaXRpbmcgZm9yIGNvbnRhaW5lciB0byBjb21lIHVwIgogIHNsZWVwIDE7CmRvbmUKISBzdWRvIGNyaWN0bCBleGVjICRvdXRwdXQgL2JpbmRhdGEvc2NyaXB0cy9iZjItc3dpdGNoLW1vZGUuc2ggIiRAIgo= mode: 0755 overwrite: true path: /etc/default/switch_in_sriov_config_daemon.sh systemd: units: - name: dpu-switch.service enabled: true contents: | [Unit] Description=Switch BlueField2 card to NIC/DPU mode RequiresMountsFor=%t/containers Wants=network.target After=network-online.target kubelet.service [Service] SuccessExitStatus=0 120 RemainAfterExit=True ExecStart=/bin/bash -c '/etc/default/switch_in_sriov_config_daemon.sh nic || shutdown -r now' 1 Type=oneshot [Install] WantedBy=multi-user.target 1 Optional: The PCI address of a specific card can optionally be specified, for example ExecStart=/bin/bash -c '/etc/default/switch_in_sriov_config_daemon.sh nic 0000:5e:00.0 || echo done' . By default, the first device is selected. If there is more than one device, you must specify which PCI address to be used. The PCI address must be the same on all nodes that are switching Bluefield-2 from DPU mode to NIC mode. Wait for the worker nodes to restart. After restarting, the Bluefield-2 network device on the worker nodes is switched into NIC mode. Optional: You might need to restart the host hardware because most recent Bluefield-2 firmware releases require a hardware restart to switch into NIC mode. Additional resources Installing SR-IOV Network Operator | [
"oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable=\"true\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 needVhostNet: false 7 numVfs: <num> 8 externallyManaged: false 9 nicSelector: 10 vendor: \"<vendor_code>\" 11 deviceID: \"<device_id>\" 12 pfNames: [\"<pf_name>\", ...] 13 rootDevices: [\"<pci_bus_id>\", ...] 14 netFilter: \"<filter_string>\" 15 deviceType: <device_type> 16 isRdma: false 17 linkType: <link_type> 18 eSwitchMode: \"switchdev\" 19 excludeTopology: false 20",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> namespace: openshift-sriov-network-operator spec: resourceName: <sriov_resource_name> nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: <num> nicSelector: vendor: \"<vendor_code>\" deviceID: \"<device_id>\" rootDevices: - \"<pci_bus_id>\" linkType: <link_type> isRdma: true",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> namespace: openshift-sriov-network-operator spec: resourceName: <sriov_resource_name> nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 1 1 nicSelector: vendor: \"<vendor_code>\" deviceID: \"<device_id>\" netFilter: \"openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509\" 2",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: \"39824\" status: interfaces: 2 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: \"0000:18:00.0\" totalvfs: 8 vendor: 15b3 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: \"0000:18:00.1\" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: \"8086\" syncStatus: Succeeded",
"pfNames: [\"netpf0#2-7\"]",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#0-0\"] deviceType: netdevice",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#8-15\"] deviceType: vfio-pci",
"ip link show <interface> 1",
"5: ens3f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 3c:fd:fe:d1:bc:01 brd ff:ff:ff:ff:ff:ff vf 0 link/ether 5a:e7:88:25:ea:a0 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 1 link/ether 3e:1d:36:d7:3d:49 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 2 link/ether ce:09:56:97:df:f9 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 3 link/ether 5e:91:cf:88:d1:38 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 4 link/ether e6:06:a1:96:2f:de brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off",
"apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/sriov1: 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/sriov1: 1 volumeMounts: - mountPath: /dev/hugepages name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 1 volumes: - name: hugepage emptyDir: medium: HugePages",
"apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: k8s.v1.cni.cncf.io/networks: hwoffload1 spec: runtimeClassName: performance-cnf-performanceprofile 1 containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f <name>-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: [\"sleep\", \"infinity\"] resources: limits: memory: \"1Gi\" 3 cpu: \"2\" 4 requests: memory: \"1Gi\" cpu: \"2\"",
"oc create -f <filename> 1",
"oc describe pod sample-pod",
"oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus",
"oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name>",
"\"lastSyncError\": \"write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 11 trust: \"<trust_vf>\" 12 capabilities: <capabilities> 13",
"cniVersion: operator.openshift.io/v1 kind: Network =metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"whereabouts-dual-stack\", \"cniVersion\": \"0.3.1, \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"ipRanges\": [ {\"range\": \"192.168.10.0/24\"}, {\"range\": \"2001:db8::/64\"} ] } }",
"oc exec -it mypod -- ip a",
"{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #",
"{ \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/29\", \"network_name\": \"example_net_common\", 1 } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/24\", \"network_name\": \"example_net_common\", 1 } }",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }",
"oc create -f <name>.yaml",
"oc get net-attach-def -n <namespace>",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } vlan: 0 resourceName: intelnics metaPlugins : | { \"type\": \"vrf\", 1 \"vrfname\": \"example-vrf-name\" 2 }",
"oc create -f sriov-network-attachment.yaml",
"oc get network-attachment-definitions -n <namespace> 1",
"NAME AGE additional-sriov-network-1 14m",
"ip vrf show",
"Name Table ----------------------- red 10",
"ip link",
"5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode",
"[ { \"name\": \"<name>\", 1 \"mac\": \"<mac_address>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"net1\", \"mac\": \"20:04:0f:f1:88:01\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]",
"oc create -f <name>.yaml",
"oc get pod <name> -o yaml",
"oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:",
"apiVersion: v1 kind: SriovNetworkPoolConfig metadata: name: pool-1 1 namespace: openshift-sriov-network-operator 2 spec: maxUnavailable: 2 3 nodeSelector: 4 matchLabels: node-role.kubernetes.io/worker: \"\"",
"oc create -f sriov-nw-pool.yaml",
"oc create namespace sriov-test",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: pfNames: [\"ens1\"] nodeSelector: node-role.kubernetes.io/worker: \"\" numVfs: 5 priority: 99 resourceName: sriov_nic_1",
"oc create -f sriov-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nic-1 namespace: openshift-sriov-network-operator spec: linkState: auto networkNamespace: sriov-test resourceName: sriov_nic_1 capabilities: '{ \"mac\": true, \"ips\": true }' ipam: '{ \"type\": \"static\" }'",
"oc create -f sriov-network.yaml",
"oc get sriovNetworkpoolConfig -n openshift-sriov-network-operator",
"NAME AGE pool-1 67s 1",
"oc patch SriovNetworkNodePolicy sriov-nic-1 -n openshift-sriov-network-operator --type merge -p '{\"spec\": {\"numVfs\": 4}}'",
"oc get sriovNetworkNodeState -n openshift-sriov-network-operator",
"NAMESPACE NAME SYNC STATUS DESIRED SYNC STATE CURRENT SYNC STATE AGE openshift-sriov-network-operator worker-0 InProgress Drain_Required DrainComplete 3d10h openshift-sriov-network-operator worker-1 InProgress Drain_Required DrainComplete 3d10h",
"NAMESPACE NAME SYNC STATUS DESIRED SYNC STATE CURRENT SYNC STATE AGE openshift-sriov-network-operator worker-0 Succeeded Idle Idle 3d10h openshift-sriov-network-operator worker-1 Succeeded Idle Idle 3d10h",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <policy_name> namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 1 nodeSelector: kubernetes.io/hostname: <node_name> numVfs: <number_of_Vfs> nicSelector: 2 vendor: \"<vendor_ID>\" deviceID: \"<device_ID>\" deviceType: netdevice excludeTopology: true 3",
"oc create -f sriov-network-node-policy.yaml",
"sriovnetworknodepolicy.sriovnetwork.openshift.io/policy-for-numa-0 created",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-numa-0-network 1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 2 networkNamespace: <namespace> 3 ipam: |- 4 { \"type\": \"<ipam_type>\", }",
"oc create -f sriov-network.yaml",
"sriovnetwork.sriovnetwork.openshift.io/sriov-numa-0-network created",
"apiVersion: v1 kind: Pod metadata: name: <pod_name> annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"sriov-numa-0-network\", 1 } ] spec: containers: - name: <container_name> image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]",
"oc create -f sriov-network-pod.yaml",
"pod/example-pod created",
"oc get pod <pod_name>",
"NAME READY STATUS RESTARTS AGE test-deployment-sriov-76cbbf4756-k9v72 1/1 Running 0 45h",
"oc debug pod/<pod_name>",
"chroot /host",
"lscpu | grep NUMA",
"NUMA node(s): 2 NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18, NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,",
"cat /proc/self/status | grep Cpus",
"Cpus_allowed: aa Cpus_allowed_list: 1,3,5,7",
"cat /sys/class/net/net1/device/numa_node",
"0",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7",
"cniVersion: operator.openshift.io/v1 kind: Network =metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"whereabouts-dual-stack\", \"cniVersion\": \"0.3.1, \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"ipRanges\": [ {\"range\": \"192.168.10.0/24\"}, {\"range\": \"2001:db8::/64\"} ] } }",
"oc exec -it mypod -- ip a",
"{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #",
"{ \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/29\", \"network_name\": \"example_net_common\", 1 } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/24\", \"network_name\": \"example_net_common\", 1 } }",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }",
"oc create -f <name>.yaml",
"oc get net-attach-def -n <namespace>",
"[ { \"name\": \"<network_attachment>\", 1 \"infiniband-guid\": \"<guid>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"ib1\", \"infiniband-guid\": \"c2:11:22:33:44:55:66:77\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]",
"oc create -f <name>.yaml",
"oc get pod <name> -o yaml",
"oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:",
"oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable=\"true\"",
"oc create namespace sysctl-tuning-test",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyoneflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 nodeSelector: 4 feature.node.kubernetes.io/network-sriov.capable=\"true\" priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: [\"ens5\"] 8 deviceType: \"netdevice\" 9 isRdma: false 10",
"oc create -f policyoneflag-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"Succeeded",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: onevalidflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 networkNamespace: sysctl-tuning-test 4 ipam: '{ \"type\": \"static\" }' 5 capabilities: '{ \"mac\": true, \"ips\": true }' 6 metaPlugins : | 7 { \"type\": \"tuning\", \"capabilities\":{ \"mac\":true }, \"sysctl\":{ \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" } }",
"oc create -f sriov-network-interface-sysctl.yaml",
"oc get network-attachment-definitions -n <namespace> 1",
"NAME AGE onevalidflag 14m",
"apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"onevalidflag\", 1 \"mac\": \"0a:56:0a:83:04:0c\", 2 \"ips\": [\"10.100.100.200/24\"] 3 } ] spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod -n sysctl-tuning-test",
"NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s",
"oc rsh -n sysctl-tuning-test tunepod",
"sysctl net.ipv4.conf.net1.accept_redirects",
"net.ipv4.conf.net1.accept_redirects = 1",
"oc create namespace sysctl-tuning-test",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyallflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 nodeSelector: 4 node.alpha.kubernetes-incubator.io/nfd-network-sriov.capable = `true` priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: [\"ens1f0\"] 8 deviceType: \"netdevice\" 9 isRdma: false 10",
"oc create -f policyallflags-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"Succeeded",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: allvalidflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 networkNamespace: sysctl-tuning-test 4 capabilities: '{ \"mac\": true, \"ips\": true }' 5",
"oc create -f sriov-network-attachment.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bond-sysctl-network namespace: sysctl-tuning-test spec: config: '{ \"cniVersion\":\"0.4.0\", \"name\":\"bound-net\", \"plugins\":[ { \"type\":\"bond\", 1 \"mode\": \"active-backup\", 2 \"failOverMac\": 1, 3 \"linksInContainer\": true, 4 \"miimon\": \"100\", \"links\": [ 5 {\"name\": \"net1\"}, {\"name\": \"net2\"} ], \"ipam\":{ 6 \"type\":\"static\" } }, { \"type\":\"tuning\", 7 \"capabilities\":{ \"mac\":true }, \"sysctl\":{ \"net.ipv4.conf.IFNAME.accept_redirects\": \"0\", \"net.ipv4.conf.IFNAME.accept_source_route\": \"0\", \"net.ipv4.conf.IFNAME.disable_policy\": \"1\", \"net.ipv4.conf.IFNAME.secure_redirects\": \"0\", \"net.ipv4.conf.IFNAME.send_redirects\": \"0\", \"net.ipv6.conf.IFNAME.accept_redirects\": \"0\", \"net.ipv6.conf.IFNAME.accept_source_route\": \"1\", \"net.ipv6.neigh.IFNAME.base_reachable_time_ms\": \"20000\", \"net.ipv6.neigh.IFNAME.retrans_time_ms\": \"2000\" } } ] }'",
"oc create -f sriov-bond-network-interface.yaml",
"oc get network-attachment-definitions -n <namespace> 1",
"NAME AGE bond-sysctl-network 22m allvalidflags 47m",
"apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ {\"name\": \"allvalidflags\"}, 1 {\"name\": \"allvalidflags\"}, { \"name\": \"bond-sysctl-network\", \"interface\": \"bond0\", \"mac\": \"0a:56:0a:83:04:0c\", 2 \"ips\": [\"10.100.100.200/24\"] 3 } ] spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod -n sysctl-tuning-test",
"NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s",
"oc rsh -n sysctl-tuning-test tunepod",
"sysctl net.ipv6.neigh.bond0.base_reachable_time_ms",
"net.ipv6.neigh.bond0.base_reachable_time_ms = 20000",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnetpolicy-mlx namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: deviceID: \"1017\" pfNames: - ens8f0np0#0-9 rootDevices: - 0000:d8:00.0 vendor: \"15b3\" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 10 priority: 99 resourceName: resourcemlx",
"oc create -f sriovnetpolicy-mlx.yaml",
"oc create namespace enable-allmulti-test",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: enableallmulti 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: enableallmulti 3 networkNamespace: enable-allmulti-test 4 ipam: '{ \"type\": \"static\" }' 5 capabilities: '{ \"mac\": true, \"ips\": true }' 6 trust: \"on\" 7 metaPlugins : | 8 { \"type\": \"tuning\", \"capabilities\":{ \"mac\":true }, \"allmulti\": true } }",
"oc create -f sriov-enable-all-multicast.yaml",
"oc get network-attachment-definitions -n <namespace> 1",
"NAME AGE enableallmulti 14m",
"oc get sriovnetwork -n openshift-sriov-network-operator",
"apiVersion: v1 kind: Pod metadata: name: samplepod namespace: enable-allmulti-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"enableallmulti\", 1 \"mac\": \"0a:56:0a:83:04:0c\", 2 \"ips\": [\"10.100.100.200/24\"] 3 } ] spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod -n enable-allmulti-test",
"NAME READY STATUS RESTARTS AGE samplepod 1/1 Running 0 47s",
"oc rsh -n enable-allmulti-test samplepod",
"sh-4.4# ip link",
"1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP mode DEFAULT group default link/ether 0a:58:0a:83:00:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0 1 3: net1@if24: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether ee:9b:66:a4:ec:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0 2",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnetpolicy-810 namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: pfNames: - ens5f0#0-9 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 10 priority: 99 resourceName: resource810",
"oc create -f sriovnetpolicy-810-sriov-node-network.yaml",
"watch -n 1 'oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath=\"{.status.syncStatus}\"'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriovnetwork-1ad-810 namespace: openshift-sriov-network-operator spec: ipam: '{}' vlan: 171 1 vlanProto: \"802.1ad\" 2 networkNamespace: default resourceName: resource810",
"oc create -f nad-sriovnetwork-1ad-810.yaml",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: nad-cvlan100 namespace: default spec: config: '{ \"name\": \"vlan-100\", \"cniVersion\": \"0.3.1\", \"type\": \"vlan\", \"linkInContainer\": true, \"master\": \"net1\", 1 \"vlanId\": 100, \"ipam\": {\"type\": \"static\"} }'",
"oc apply -f nad-cvlan100.yaml",
"apiVersion: v1 kind: Pod metadata: name: test-pod annotations: k8s.v1.cni.cncf.io/networks: sriovnetwork-1ad-810, nad-cvlan100 spec: containers: - name: test-container image: quay.io/ocp-edge-qe/cnf-gotests-client:v4.10 imagePullPolicy: Always securityContext: privileged: true",
"oc create -f test-qinq-pod.yaml",
"oc debug node/my-cluster-node -- bash -c \"ip link show ens5f0\"",
"6: ens5f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether b4:96:91:a5:22:10 brd ff:ff:ff:ff:ff:ff vf 0 link/ether a2:81:ba:d0:6f:f3 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 1 link/ether 8a:bb:0a:36:f2:ed brd ff:ff:ff:ff:ff:ff, vlan 171, vlan protocol 802.1ad, spoof checking on, link-state auto, trust off vf 2 link/ether ca:0e:e1:5b:0c:d2 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 3 link/ether ee:6c:e2:f5:2c:70 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 4 link/ether 0a:d6:b7:66:5e:e8 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 5 link/ether da:d5:e7:14:4f:aa brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 6 link/ether d6:8e:85:75:12:5c brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 7 link/ether d6:eb:ce:9c:ea:78 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 8 link/ether 5e:c5:cc:05:93:3c brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust on vf 9 link/ether a6:5a:7c:1c:2a:16 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"8086\" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0']",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { \"type\": \"host-local\", 2 \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [ {\"dst\": \"224.0.0.0/5\"}, {\"dst\": \"232.0.0.0/5\"} ], \"gateway\": \"10.56.217.1\" } resourceName: example",
"apiVersion: v1 kind: Pod metadata: name: testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: nic1 spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: [\"NET_ADMIN\"] 1 command: [ \"sleep\", \"infinity\"]",
"apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] command: [\"sleep\", \"infinity\"]",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" requests: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"8086\" deviceID: \"158b\" pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: vfio-pci 1",
"oc create -f intel-dpdk-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- ... 1 vlan: <vlan> resourceName: intelnics",
"oc create -f intel-dpdk-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/intelnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/intelnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f intel-dpdk-pod.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3",
"oc create -f mlx-dpdk-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics",
"oc create -f mlx-dpdk-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/mlxnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/mlxnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f mlx-dpdk-pod.yaml",
"apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: \"false\"",
"oc apply -f test-namespace.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 needVhostNet: true 3 nicSelector: vendor: \"15b3\" 4 deviceID: \"101b\" 5 rootDevices: [\"00:05.0\"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"",
"oc create -f sriov-node-network-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: \"off\" trust: \"on\"",
"oc create -f sriov-network-attachment.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tap-one namespace: test-namespace 1 spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tap\", \"plugins\": [ { \"type\": \"tap\", \"multiQueue\": true, \"selinuxcontext\": \"system_u:system_r:container_t:s0\" }, { \"type\":\"tuning\", \"capabilities\":{ \"mac\":true } } ] }'",
"oc apply -f tap-example.yaml",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: test-namespace 1 annotations: k8s.v1.cni.cncf.io/networks: '[ {\"name\": \"sriov-network\", \"namespace\": \"test-namespace\"}, {\"name\": \"tap-one\", \"interface\": \"ext0\", \"namespace\": \"test-namespace\"}]' spec: nodeSelector: kubernetes.io/hostname: \"worker-0\" securityContext: fsGroup: 1001 2 runAsGroup: 1001 3 seccompProfile: type: RuntimeDefault containers: - name: testpmd image: <DPDK_image> 4 securityContext: capabilities: drop: [\"ALL\"] 5 add: 6 - IPC_LOCK - NET_RAW #for mlx only 7 runAsUser: 1001 8 privileged: false 9 allowPrivilegeEscalation: true 10 runAsNonRoot: true 11 volumeMounts: - mountPath: /mnt/huge 12 name: hugepages resources: limits: openshift.io/sriovnic: \"1\" 13 memory: \"1Gi\" cpu: \"4\" 14 hugepages-1Gi: \"4Gi\" 15 requests: openshift.io/sriovnic: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] runtimeClassName: performance-cnf-performanceprofile 16 volumes: - name: hugepages emptyDir: medium: HugePages",
"oc create -f dpdk-pod-rootless.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: globallyDisableIrqLoadBalancing: true cpu: isolated: 21-51,73-103 1 reserved: 0-20,52-72 2 hugepages: defaultHugepagesSize: 1G 3 pages: - count: 32 size: 1G net: userLevelNetworking: true numa: topologyPolicy: \"single-numa-node\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"oc create -f mlx-dpdk-perfprofile-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 1 needVhostNet: true 2 nicSelector: pfNames: [\"ens3f0\"] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 10 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci needVhostNet: true nicSelector: pfNames: [\"ens3f1\"] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 10 priority: 99 resourceName: dpdk_nic_2",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 nicSelector: rootDevices: - \"0000:5e:00.1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 5 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-2 namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: rootDevices: - \"0000:5e:00.0\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 5 priority: 99 resourceName: dpdk_nic_2",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-1 namespace: openshift-sriov-network-operator spec: ipam: '{\"type\": \"host-local\",\"ranges\": [[{\"subnet\": \"10.0.1.0/24\"}]],\"dataDir\": \"/run/my-orchestrator/container-ipam-state-1\"}' 1 networkNamespace: dpdk-test 2 spoofChk: \"off\" trust: \"on\" resourceName: dpdk_nic_1 3 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-2 namespace: openshift-sriov-network-operator spec: ipam: '{\"type\": \"host-local\",\"ranges\": [[{\"subnet\": \"10.0.2.0/24\"}]],\"dataDir\": \"/run/my-orchestrator/container-ipam-state-1\"}' networkNamespace: dpdk-test spoofChk: \"off\" trust: \"on\" resourceName: dpdk_nic_2",
"apiVersion: v1 kind: Namespace metadata: name: dpdk-test --- apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ 1 { \"name\": \"dpdk-network-1\", \"namespace\": \"dpdk-test\" }, { \"name\": \"dpdk-network-2\", \"namespace\": \"dpdk-test\" } ]' irq-load-balancing.crio.io: \"disable\" 2 cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" labels: app: dpdk name: testpmd namespace: dpdk-test spec: runtimeClassName: performance-performance 3 containers: - command: - /bin/bash - -c - sleep INF image: registry.redhat.io/openshift4/dpdk-base-rhel8 imagePullPolicy: Always name: dpdk resources: 4 limits: cpu: \"16\" hugepages-1Gi: 8Gi memory: 2Gi requests: cpu: \"16\" hugepages-1Gi: 8Gi memory: 2Gi securityContext: capabilities: add: - IPC_LOCK - SYS_RESOURCE - NET_RAW - NET_ADMIN runAsUser: 0 volumeMounts: - mountPath: /mnt/huge name: hugepages terminationGracePeriodSeconds: 5 volumes: - emptyDir: medium: HugePages name: hugepages",
"#!/bin/bash set -ex export CPU=USD(cat /sys/fs/cgroup/cpuset/cpuset.cpus) echo USD{CPU} dpdk-testpmd -l USD{CPU} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_1} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_2} -n 4 -- -i --nb-cores=15 --rxd=4096 --txd=4096 --rxq=7 --txq=7 --forward-mode=mac --eth-peer=0,50:00:00:00:00:01 --eth-peer=1,50:00:00:00:00:02",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3",
"oc create -f mlx-rdma-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics",
"oc create -f mlx-rdma-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: memory: \"1Gi\" cpu: \"4\" 5 hugepages-1Gi: \"4Gi\" 6 requests: memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f mlx-rdma-pod.yaml",
"apiVersion: v1 kind: Pod metadata: name: testpmd-dpdk namespace: mynamespace annotations: cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/dpdk1: 1 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/dpdk1: 1 volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 2 volumes: - name: hugepage emptyDir: medium: HugePages",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bond-net1 namespace: demo spec: config: '{ \"type\": \"bond\", 1 \"cniVersion\": \"0.3.1\", \"name\": \"bond-net1\", \"mode\": \"active-backup\", 2 \"failOverMac\": 1, 3 \"linksInContainer\": true, 4 \"miimon\": \"100\", \"mtu\": 1500, \"links\": [ 5 {\"name\": \"net1\"}, {\"name\": \"net2\"} ], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } }'",
"apiVersion: v1 kind: Pod metadata: name: bondpod1 namespace: demo annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1 1 spec: containers: - name: podexample image: quay.io/openshift/origin-network-interface-bond-cni:4.11.0 command: [\"/bin/bash\", \"-c\", \"sleep INF\"]",
"oc apply -f podbonding.yaml",
"oc rsh -n demo bondpod1 sh-4.4# sh-4.4# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 3: eth0@if150: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP link/ether 62:b1:b5:c8:fb:7a brd ff:ff:ff:ff:ff:ff inet 10.244.1.122/24 brd 10.244.1.255 scope global eth0 valid_lft forever preferred_lft forever 4: net3: <BROADCAST,MULTICAST,UP,LOWER_UP400> mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 1 inet 10.56.217.66/24 scope global bond0 valid_lft forever preferred_lft forever 43: net1: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 2 44: net2: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 3",
"annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1@bond0",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default 1 namespace: openshift-sriov-network-operator spec: enableInjector: true enableOperatorWebhook: true configurationMode: \"systemd\" 2 logLevel: 2",
"oc apply -f sriovOperatorConfig.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-offloading 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-offloading]} 2 nodeSelector: matchLabels: node-role.kubernetes.io/mcp-offloading: \"\" 3",
"oc create -f mcp-offloading.yaml",
"oc label node worker-2 node-role.kubernetes.io/mcp-offloading=\"\"",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 2d v1.29.4 master-1 Ready master 2d v1.29.4 master-2 Ready master 2d v1.29.4 worker-0 Ready worker 2d v1.29.4 worker-1 Ready worker 2d v1.29.4 worker-2 Ready mcp-offloading,worker 47h v1.29.4 worker-3 Ready mcp-offloading,worker 47h v1.29.4",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkPoolConfig metadata: name: sriovnetworkpoolconfig-offload namespace: openshift-sriov-network-operator spec: ovsHardwareOffloadConfig: name: mcp-offloading 1",
"oc create -f <SriovNetworkPoolConfig_name>.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy 1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 2 eSwitchMode: \"switchdev\" 3 nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 6 priority: 5 resourceName: mlxnics",
"oc create -f sriov-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USD{name} namespace: openshift-sriov-network-operator spec: deviceType: switchdev isRdma: true nicSelector: netFilter: openstack/NetworkID:USD{net_id} nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: USD{name}",
"oc label node <node-name> network.operator.openshift.io/smart-nic=",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-mgmt-vf-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: \"switchdev\" nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0#0-0 1 nodeSelector: network.operator.openshift.io/smart-nic: \"\" numVfs: 6 2 priority: 5 resourceName: mgmtvf",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: \"switchdev\" nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0#1-5 1 nodeSelector: network.operator.openshift.io/smart-nic: \"\" numVfs: 6 2 priority: 5 resourceName: mlxnics",
"oc create -f sriov-node-policy.yaml",
"oc create -f sriov-node-mgmt-vf-policy.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: hardware-offload-config namespace: openshift-network-operator data: mgmt-port-resource-name: openshift.io/mgmtvf",
"oc create -f hardware-offload-config.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: net-attach-def 1 namespace: net-attach-def 2 annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/mlxnics 3 spec: config: '{\"cniVersion\":\"0.3.1\",\"name\":\"ovn-kubernetes\",\"type\":\"ovn-k8s-cni-overlay\",\"ipam\":{},\"dns\":{}}'",
"oc create -f net-attach-def.yaml",
"oc get net-attach-def -A",
"NAMESPACE NAME AGE net-attach-def net-attach-def 43h",
". metadata: annotations: v1.multus-cni.io/default-network: net-attach-def/net-attach-def 1",
"oc label node <example_node_name_one> node-role.kubernetes.io/sriov=",
"oc label node <example_node_name_two> node-role.kubernetes.io/sriov=",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: sriov spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,sriov]} nodeSelector: matchLabels: node-role.kubernetes.io/sriov: \"\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: sriov name: 99-bf2-dpu spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ZmluZF9jb250YWluZXIoKSB7CiAgY3JpY3RsIHBzIC1vIGpzb24gfCBqcSAtciAnLmNvbnRhaW5lcnNbXSB8IHNlbGVjdCgubWV0YWRhdGEubmFtZT09InNyaW92LW5ldHdvcmstY29uZmlnLWRhZW1vbiIpIHwgLmlkJwp9CnVudGlsIG91dHB1dD0kKGZpbmRfY29udGFpbmVyKTsgW1sgLW4gIiRvdXRwdXQiIF1dOyBkbwogIGVjaG8gIndhaXRpbmcgZm9yIGNvbnRhaW5lciB0byBjb21lIHVwIgogIHNsZWVwIDE7CmRvbmUKISBzdWRvIGNyaWN0bCBleGVjICRvdXRwdXQgL2JpbmRhdGEvc2NyaXB0cy9iZjItc3dpdGNoLW1vZGUuc2ggIiRAIgo= mode: 0755 overwrite: true path: /etc/default/switch_in_sriov_config_daemon.sh systemd: units: - name: dpu-switch.service enabled: true contents: | [Unit] Description=Switch BlueField2 card to NIC/DPU mode RequiresMountsFor=%t/containers Wants=network.target After=network-online.target kubelet.service [Service] SuccessExitStatus=0 120 RemainAfterExit=True ExecStart=/bin/bash -c '/etc/default/switch_in_sriov_config_daemon.sh nic || shutdown -r now' 1 Type=oneshot [Install] WantedBy=multi-user.target"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/networking/hardware-networks |
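To confirm that the SriovNetworkNodePolicy, SriovNetwork, and pod manifests above took effect, a few read-only checks are sketched below. This is an illustrative verification sketch rather than part of the official procedure; the node, namespace, pod, and resource names (worker-0, test-namespace, dpdk-app, openshift.io/sriovnic) are assumptions carried over from the example manifests and will differ in other clusters.

    # Check that the SR-IOV Network Operator finished configuring the VFs on the node:
    oc get sriovnetworknodestates worker-0 -n openshift-sriov-network-operator -o yaml | grep syncStatus
    # Check that the node advertises the requested resource:
    oc describe node worker-0 | grep -i sriovnic
    # From inside the running pod, the device plugin exports the allocated VF PCI address:
    oc exec -n test-namespace dpdk-app -- env | grep PCIDEVICE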
Chapter 74. trust | Chapter 74. trust This chapter describes the commands under the trust command. 74.1. trust create Create new trust Usage: Table 74.1. Positional arguments Value Summary <trustor-user> User that is delegating authorization (name or id) <trustee-user> User that is assuming authorization (name or id) Table 74.2. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Project being delegated (name or id) (required) --role <role> Roles to authorize (name or id) (repeat option to set multiple values, required) --impersonate Tokens generated from the trust will represent <trustor> (defaults to False) --expiration <expiration> Sets an expiration date for the trust (format of yyyy- mm-ddTHH:MM:SS) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --trustor-domain <trustor-domain> Domain that contains <trustor> (name or id) --trustee-domain <trustee-domain> Domain that contains <trustee> (name or id) Table 74.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 74.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 74.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.2. trust delete Delete trust(s) Usage: Table 74.7. Positional arguments Value Summary <trust> Trust(s) to delete Table 74.8. Command arguments Value Summary -h, --help Show this help message and exit 74.3. trust list List trusts Usage: Table 74.9. Command arguments Value Summary -h, --help Show this help message and exit --trustor <trustor-user> Trustor user to filter (name or id) --trustee <trustee-user> Trustee user to filter (name or id) --trustor-domain <trustor-domain> Domain that contains <trustor> (name or id) --trustee-domain <trustee-domain> Domain that contains <trustee> (name or id) --auth-user Only list trusts related to the authenticated user Table 74.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 74.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 74.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.4. trust show Display trust details Usage: Table 74.14. Positional arguments Value Summary <trust> Trust to display Table 74.15. Command arguments Value Summary -h, --help Show this help message and exit Table 74.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 74.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 74.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack trust create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --project <project> --role <role> [--impersonate] [--expiration <expiration>] [--project-domain <project-domain>] [--trustor-domain <trustor-domain>] [--trustee-domain <trustee-domain>] <trustor-user> <trustee-user>",
"openstack trust delete [-h] <trust> [<trust> ...]",
"openstack trust list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--trustor <trustor-user>] [--trustee <trustee-user>] [--trustor-domain <trustor-domain>] [--trustee-domain <trustee-domain>] [--auth-user]",
"openstack trust show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <trust>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/trust |
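As a worked example to complement the generated reference above, the following sequence delegates a role from one user to another and then inspects and removes the delegation. The project name demo, role name member, and user names alice and bob are placeholders rather than values taken from the reference; substitute identities that exist in your deployment.

    openstack trust create --project demo --role member --impersonate --expiration 2026-01-31T00:00:00 alice bob
    openstack trust list --trustor alice
    openstack trust show <trust_id>
    openstack trust delete <trust_id>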
A.5. Investigating Why a Service Fails to Start | A.5. Investigating Why a Service Fails to Start Review the log for the service that fails to start. See Section C.2, "Identity Management Log Files and Directories" . For example, the log for Directory Server is at /var/log/dirsrv/slapd- IPA-EXAMPLE-COM /errors . Make sure that the server on which the service is running has a fully qualified domain name (FQDN). See the section called "Verifying the Server Host Name" . If the /etc/hosts file contains an entry for the server on which the service is running, make sure the fully qualified domain name is listed first. See also the section called "The /etc/hosts File" . Make sure you meet the other conditions in Section 2.1.5, "Host Name and DNS Configuration" . Determine what keys are included in the keytab that is used for authentication of the service. For example, for the dirsrv service ticket: Make sure that the displayed principals match the system's FQDN. Make sure that the displayed version of the keys (KVNO) in the above-mentioned service keytab match the KVNO in the server keytab. To display the server keytab: Verify that the forward (A, AAAA, or both) and reverse records on the client match the displayed system name and service principal. Verify that the forward (A, AAAA, or both) and reverse records on the client are correct. Make sure that the system time difference on the client and the server is 5 minutes at the most. Services can fail to start after the IdM administrative server certificates expire. To check if this is the cause in your case: Use the getcert list command to list all certificates tracked by the certmonger utility. In the output, find the IdM administrative certificates: the ldap and httpd server certificates. Examine the fields labeled status and expires . If you need to start the service even though the certificates are expired, see Section 26.5, "Allowing IdM to Start with Expired Certificates" . | [
"klist -kt /etc/dirsrv/ds.keytab Keytab name: FILE:/etc/dirsrv/ds.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 2 01/10/2017 14:54:39 ldap/[email protected] 2 01/10/2017 14:54:39 ldap/[email protected] [... output truncated ...]",
"kinit admin USD kvno ldap/ [email protected]",
"getcert list Number of certificates and requests being tracked: 8. [... output truncated ...] Request ID '20170421124617': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/etc/dirsrv/slapd-IPA-EXAMPLE-COM',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/dirsrv/slapd-IPA-EXAMPLE-COM/pwdfile.txt' certificate: type=NSSDB,location='/etc/dirsrv/slapd-IPA-EXAMPLE-COM',nickname='Server-Cert',token='NSS Certificate DB' CA: IPA issuer: CN=Certificate Authority,O=IPA.EXAMPLE.COM subject: CN=ipa.example.com,O=IPA.EXAMPLE.COM expires: 2019-04-22 12:46:17 UTC [... output truncated ...] Request ID '20170421130535': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt' certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB' CA: IPA issuer: CN=Certificate Authority,O=IPA.EXAMPLE.COM subject: CN=ipa.example.com,O=IPA.EXAMPLE.COM expires: 2019-04-22 13:05:35 UTC [... output truncated ...]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/trouble-gen-service |
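The individual checks in this section can be run back to back when triaging a host. The following is a minimal sketch that reuses the example host and realm from this section (ipa.example.com, IPA.EXAMPLE.COM); the IP address is a placeholder, and whether you query ntpd or chronyd depends on which time service the host runs.

    hostname -f                                  # must return the fully qualified domain name
    getent hosts ipa.example.com                 # resolution through NSS; in /etc/hosts the FQDN must be listed first
    dig +short A ipa.example.com                 # forward record
    dig +short -x <ip_address>                   # reverse record
    klist -kt /etc/dirsrv/ds.keytab              # KVNO of the keys in the service keytab
    kinit admin
    kvno ldap/ipa.example.com@IPA.EXAMPLE.COM    # KVNO known to the server
    ntpstat                                      # or: chronyc tracking
    getcert list | grep -E 'status:|expires:'    # certificate status and expiry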
7.199. python | 7.199. python 7.199.1. RHBA-2013:0437 - python bug fix update Updated python packages that fix several bugs are now available for Red Hat Enterprise Linux 6. Python is an interpreted, interactive, object-oriented programming language often compared to Tcl, Perl, Scheme, or Java. Python includes modules, classes, exceptions, very high level dynamic data types and dynamic typing. Python supports interfaces to many system calls and libraries, as well as to various windowing systems (X11, Motif, Tk, Mac and MFC). Bug Fixes BZ# 707944 Previously, applying the python-2.6.5-ctypes-noexecmem patch caused the ctypes.CFUNCTYPE() function to allocate memory in order to avoid running the process in a SELinux domain with the execmem permission. When this allocation process forked without using the exec() function (for example in a multi-processing module), the state of the allocator was shared between parent and child processes. This shared state caused unpredictable interactions between the processes, potentially leading to segmentation faults or lack of termination of a multiprocessing workload. With this update, python-2.6.5-ctypes-noexecmem has been reverted, and the unpredictable behavior no longer occurs. In addition, Python programs are now required to run within a SELinux domain with execmem permissions. BZ# 814391 Prior to this update, any usage of the ctypes module (such as via the "uuid" module used by the Django application framework) triggered the ctypes.CFUNCTYPE() function on module import. Consequently, if the process was missing SELinux permissions, AVC denial messages were returned. This bug has been fixed, and SELinux permissions are now required only in relevant cases of ctypes usage, such as passing a Python callable to a C callback. BZ# 810847 , BZ# 841748 In certain cases, enabled C-level assertions caused the python library to fail when building valid Python code. Consequently, code containing four or more nested "IF" statements within a list comprehension or generator expression failed to compile. Moreover, an error occurred when formatting certain numpy objects. With this update, the C-level assertions have been deactivated and the aforementioned problems no longer occur. BZ#833271 As part of the fix for CVE-2012-0876, a new symbol ("XML_SetHashSalt") was added to the system libexpat library, which Python standard library uses in the pyexpat module. If an unpatched libexpat.so.1 was present in a directory listed in LD_LIBRARY_PATH, then attempts to use the pyexpat module (for example from yum) would fail with an ImportError exception. This update adds an RPATH directive to pyexpat to ensure that libexpat is used by pyexpat, regardless of whether there is an unpatched libexpat within the LD_LIBRARY_PATH, thus preventing the ImportError exception. BZ# 835460 Due to a bug in the Python logging module, the SysLogHandler class continued to send log message against a closed connection. Consequently, an infinite loop occurred when SysLogHandler was used together with the Eventlet library. The bug has been fixed, and the described issue no longer occurs. All users of python are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/python |
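The advisory above does not show how to tell whether a workload exercises the code path that now requires SELinux execmem permissions. The two commands below are an illustrative check, not part of the advisory: the first deliberately builds a ctypes callback (the operation described in BZ#814391), and the second searches the audit log for resulting execmem denials. It assumes auditd is running and that the workload uses the system python interpreter.

    python -c 'import ctypes; CB = ctypes.CFUNCTYPE(ctypes.c_int); print(CB(lambda: 0)())'
    ausearch -m avc -ts recent | grep execmem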
Chapter 5. Shutting down and starting up RHOSO nodes | Chapter 5. Shutting down and starting up RHOSO nodes To perform maintenance on your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must shut down and start up the Red Hat OpenShift Container Platform (RHOCP) cluster and all the data plane nodes in a specific order to ensure minimal issues when you restart your cluster and data plane nodes. Prerequisites An operational RHOSO environment. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. The oc command line tool is installed on the workstation. 5.1. RHOSO deployment shutdown order To shut down the Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must shut down the instances that host the workload, the data plane nodes, and the Red Hat OpenShift Container Platform (RHOCP) cluster nodes in the following order: Shut down instances hosted on the Compute nodes on the data plane. If your data plane includes hyperconverged infrastructure (HCI) nodes, shut down the Red Hat Ceph Storage cluster. Shut down Compute nodes. Shut down the RHOCP cluster nodes. 5.2. Shutting down instances hosted on the Compute nodes In order to shut down the Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must first shut down all instances hosted on Compute nodes before shutting down the Compute nodes. Procedure Access the remote shell for the OpenStackClient pod from your workstation: List all running instances: Stop each instance: Repeat this step for each instance until you stop all running instances. 5.3. Shutting down the Red Hat Ceph Storage cluster for HCI environments If your data plane includes hyperconverged infrastructure (HCI) nodes, shut down the Red Hat Ceph Storage cluster. For more information about how to shut down the Red Hat Ceph Storage cluster, see Powering down and rebooting the cluster using the Ceph Orchestrator in the Red Hat Ceph Storage Administration guide. 5.4. Shutting down Compute nodes As a part of shutting down the Red Hat OpenStack Services on OpenShift (RHOSO) environment, log in to and shut down each Compute node. Prerequisites You have stopped all instances hosted on the Compute nodes. Procedure Retrieve a list of the Compute nodes: Log in as the root user to a Compute node and shut down the node: Repeat this step for each Compute node until you shut down all Compute nodes. Exit the OpenStackClient pod: 5.5. Shutting down the RHOCP cluster As a part of shutting down the Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must shut down the Red Hat OpenShift Container Platform (RHOCP) cluster that hosts the RHOSO environment. For information about how to shut down a RHOCP cluster, see Shutting down the cluster gracefully in the RHOCP Backup and restore guide. 5.6. RHOSO deployment startup order To start the Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must start the Red Hat OpenShift Container Platform (RHOCP) cluster and data plane nodes in the following order: Start the RHOCP cluster. If your data plane includes hyperconverged infrastructure (HCI) nodes, start up the Red Hat Ceph Storage cluster. Start Compute nodes. Start instances on the Compute nodes. 5.7. Starting the RHOCP cluster As a part of starting up the Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must start the Red Hat OpenShift Container Platform (RHOCP) cluster that hosts the RHOSO environment. For information about how to start up a RHOCP cluster, see Restarting the cluster gracefully in the RHOCP Backup and restore guide. 5.8.
Starting the Red Hat Ceph Storage cluster for HCI environments If your data plane includes hyperconverged infrastructure (HCI) nodes, you must use the cephadm utility to unset the noout, norecover, norebalance, nobackfill, and nodown properties, and the pause flag. For more information about how to start the Red Hat Ceph Storage cluster, see Powering down and rebooting the cluster using the Ceph Orchestrator in the Red Hat Ceph Storage Administration guide. 5.9. Starting Compute nodes As a part of starting the Red Hat OpenStack Services on OpenShift (RHOSO) environment, power on each Compute node and check the services on the node. Prerequisites Powered down Compute nodes. Procedure Power on each Compute node. Verification Log in to each Compute node as the root user. Check the services on the Compute node: 5.10. Starting instances on Compute nodes As a part of starting the Red Hat OpenStack Services on OpenShift (RHOSO) environment, start the instances on the Compute nodes. Procedure Access the remote shell for the OpenStackClient pod from your workstation: List all the instances: Start an instance: Repeat this step for each instance until you start all the instances. Exit the OpenStackClient pod: | [
"oc rsh -n openstack openstackclient",
"openstack server list --all-projects",
"openstack server stop <instance_UUID>",
"openstack compute service list",
"shutdown -h now",
"exit",
"systemctl -t service",
"oc rsh -n openstack openstackclient",
"openstack server list --all-projects",
"openstack server start <instance_UUID>",
"exit"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/maintaining_the_red_hat_openstack_services_on_openshift_deployment/assembly_shutting-down-and-starting-up-rhoso-nodes |
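The shutdown and startup procedures above stop and start instances one at a time. When many instances are involved, the per-instance step can be scripted from inside the openstackclient remote shell; the loops below are an illustrative sketch rather than part of the official procedure, and they assume credentials that can see all projects.

    # Inside the remote shell (oc rsh -n openstack openstackclient), stop every running instance:
    for id in $(openstack server list --all-projects --status ACTIVE -f value -c ID); do
      openstack server stop "$id"
    done
    # After the cluster and Compute nodes are back up, start them again:
    for id in $(openstack server list --all-projects --status SHUTOFF -f value -c ID); do
      openstack server start "$id"
    done

Note that the second loop starts every instance in SHUTOFF state, including any that were already stopped before the maintenance window; save the output of the first loop if that distinction matters.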
Chapter 3. Major differences between Red Hat build of OpenJDK 11 and Red Hat build of OpenJDK 17 | Chapter 3. Major differences between Red Hat build of OpenJDK 11 and Red Hat build of OpenJDK 17 Before migrating your Java applications from Red Hat build of OpenJDK version 8 or 11 to Red Hat build of OpenJDK 17, familiarize yourself with the changes in Red Hat build of OpenJDK 17. These changes might require that you reconfigure your existing Red Hat build of OpenJDK installation before you migrate to version 17. 3.1. Removal of Concurrent Mark Sweep garbage collector Red Hat build of OpenJDK 17 no longer includes the Concurrent Mark Sweep (CMS) garbage collector, which was commonly used in earlier releases for workloads sensitive to pause times and latency. If you have been using the CMS collector, switch to one of the following collectors based on your workload before migrating to Red Hat build of OpenJDK 17 or later. The Garbage-First (G1) collector balances performance and latency. G1 is a generational collector that offers a high ephemeral object allocation rate with typical pause times of a few hundred milliseconds. G1 is enabled by default, but you can manually enable this collector by setting the -XX:+UseG1GC JVM option. The Shenandoah collector is a low-latency collector with typical pause times of a few milliseconds. Shenandoah is not a generational collector and might exhibit worse ephemeral object allocation rates than the G1 collector. If you want to enable the Shenandoah collector, set the -XX:+UseShenandoahGC JVM option. The Z Garbage Collector (ZGC) is another low-latency collector. Unlike the Shenandoah collector, ZGC does not support compressed ordinary object pointers (OOPs) (that is, heap references). Compressed OOPs help to save heap memory and improve performance for heap sizes up to 32 GB. This means that ZGC might exhibit worse resident memory sizes than the Shenandoah collector, especially on small heap sizes. If you want to enable the ZGC collector, set the -XX:+UseZGC JVM option. For more information, see JEP 363: Remove the Concurrent Mark Sweep (CMS) Garbage Collector . 3.2. Removal of pack200 tools and API Red Hat build of OpenJDK 17 no longer includes any of the following features: The pack200 tool The unpack200 tool The java.util.jar.Pack200 API The java.util.jar.Pack200.Packer API The java.util.jar.Pack200.Unpacker API The use of these tools and APIs has been limited since the introduction of the JMOD module format in OpenJDK 9. For more information, see JEP 367: Remove the Pack200 Tools and API . 3.3. Removal of Nashorn JavaScript engine Red Hat build of OpenJDK 17 no longer includes any of the following features: The Nashorn JavaScript engine The jjs command-line tool The jdk.scripting.nashorn module The jdk.scripting.nashorn.shell module The scripting API, javax.script , is still available in Red Hat build of OpenJDK 17 or later. Similar to releases before OpenJDK 8, you can use the javax.script API with a JavaScript engine of your choice, such as Rhino or the now externally maintained Nashorn JavaScript engine. For more information, see JEP 372: Remove the Nashorn JavaScript Engine . 3.4. Strong encapsulation of JDK internal elements Red Hat build of OpenJDK 17 introduces strong encapsulation of all internal elements of the JDK, apart from critical internal APIs such as sun.misc.Unsafe . From Red Hat build of OpenJDK 17 onward, you cannot relax the strong encapsulation of internal elements by using a single command-line option. 
This means that Red Hat build of OpenJDK 17 and later versions prevent reflective access to JDK internal types apart from critical internal APIs. For more information, see JEP 403: Strongly Encapsulate JDK Internals . 3.5. Biased locking disabled by default Red Hat build of OpenJDK 17 disables biased locking by default. In Red Hat build of OpenJDK 17, you can enable biased locking by setting the -XX:+UseBiasedLocking JVM option at startup. However, the -XX:+UseBiasedLocking option is deprecated in Red Hat build of OpenJDK 17 and planned for removal in OpenJDK 18. For more information, see JEP 374: Deprecate and Disable Biased Locking . 3.6. Removal of RMI activation Red Hat build of OpenJDK 17 removes the java.rmi.activation package and its associated rmid activation daemon for Java remote method invocation (RMI). Other RMI features are still available in Red Hat build of OpenJDK 17 and later versions. For more information, see JEP 407: Remove RMI Activation . 3.7. Removal of the Graal compiler Red Hat build of OpenJDK 17 removes the Graal compiler, which comprises the jaotc tool and the jdk.internal.vm.compiler and jdk.internal.vm.compiler.management modules. From Red Hat build of OpenJDK 17 onward, if you want to use ahead-of-time (AOT) compilation, you can use GraalVM. For more information, see JEP 410: Remove the Experimental AOT and JIT Compiler . 3.8. Additional resources OpenJDK: JEPs in JDK 17 integrated since JDK 11 | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/migrating_to_red_hat_build_of_openjdk_17_from_earlier_versions/differences_11_17
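As a concrete illustration of the collector guidance in section 3.1, the launcher lines below show how a CMS-based start command might be rewritten. The application name app.jar and the 4 GB heap are placeholders, and which collector is appropriate depends on the workload characteristics described above.

    # OpenJDK 8/11 command line that is no longer accepted by OpenJDK 17:
    #   java -XX:+UseConcMarkSweepGC -Xmx4g -jar app.jar
    # OpenJDK 17 alternatives, pick one collector:
    java -XX:+UseG1GC -Xmx4g -jar app.jar            # default; balances throughput and latency
    java -XX:+UseShenandoahGC -Xmx4g -jar app.jar    # low pause times, keeps compressed OOPs
    java -XX:+UseZGC -Xmx4g -jar app.jar             # low pause times, no compressed OOPs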