title | content | commands | url
---|---|---|---|
5.89. gnome-packagekit | 5.89. gnome-packagekit 5.89.1. RHBA-2012:1229 - gnome-packagekit bug fix update Updated gnome-packagekit packages that fix a bug are now available for Red Hat Enterprise Linux 6. The gnome-packagekit packages provide session applications for the PackageKit API. Bug Fix BZ# 839197 Previously, it was possible for the user to log out of the system or shut it down while the PackageKit update tool was running and writing to the RPM database (rpmdb). Consequently, rpmdb could become damaged and inconsistent due to the unexpected termination and cause various problems with subsequent operation of the rpm, yum, and PackageKit utilities. This update modifies PackageKit to not allow shutting down the system when a transaction writing to rpmdb is active, thus fixing this bug. Users of gnome-packagekit are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/gnome-packagekit |
Chapter 12. Listeners and notifications | Chapter 12. Listeners and notifications Use listeners with Data Grid to get notifications when events occur for the Cache Manager or for caches. 12.1. Listeners and notifications Data Grid offers a listener API, where clients can register for and get notified when events take place. This annotation-driven API applies to two different levels: cache-level events and Cache Manager-level events. Events trigger a notification which is dispatched to listeners. Listeners are simple POJOs annotated with @Listener and registered using the methods defined in the Listenable interface. Both Cache and CacheManager implement Listenable, which means you can attach listeners to either a cache or a Cache Manager, to receive either cache-level or Cache Manager-level notifications. For example, the following class defines a listener to print out some information every time a new entry is added to the cache, in a non-blocking fashion: @Listener public class PrintWhenAdded { Queue<CacheEntryCreatedEvent> events = new ConcurrentLinkedQueue<>(); @CacheEntryCreated public CompletionStage<Void> print(CacheEntryCreatedEvent event) { events.add(event); return null; } } For more comprehensive examples, please see the Javadocs for @Listener . 12.2. Cache-level notifications Cache-level events occur on a per-cache basis, and by default are only raised on nodes where the events occur. Note that in a distributed cache these events are only raised on the owners of the data being affected. Examples of cache-level events are entries being added, removed, modified, etc. These events trigger notifications to listeners registered to a specific cache. Please see the Javadocs on the org.infinispan.notifications.cachelistener.annotation package for a comprehensive list of all cache-level notifications available in Data Grid, and their respective method-level annotations. Cluster listeners Cluster listeners should be used when it is desirable to listen to cache events for the whole cluster from a single node. To do so, all that is required is to annotate your listener as clustered. @Listener (clustered = true) public class MyClusterListener { .... } Cluster listeners have some limitations compared to non-clustered listeners. A cluster listener can only listen to @CacheEntryModified , @CacheEntryCreated , @CacheEntryRemoved and @CacheEntryExpired events. Note that this means any other type of event will not be delivered to this listener. Only the post event is sent to a cluster listener; the pre event is ignored. Event filtering and conversion All applicable events on the node where the listener is installed will be raised to the listener. It is possible to dynamically filter what events are raised by using a KeyFilter (only allows filtering on keys) or a CacheEventFilter (used to filter on keys, old value, old metadata, new value, new metadata, whether the command was retried, whether the event is a pre-event (that is, isPre), and also the command type). The following example shows a simple KeyFilter that only allows events to be raised when an event modifies the entry for the key Only Me . 
public class SpecificKeyFilter implements KeyFilter<String> { private final String keyToAccept; public SpecificKeyFilter(String keyToAccept) { if (keyToAccept == null) { throw new NullPointerException(); } this.keyToAccept = keyToAccept; } public boolean accept(String key) { return keyToAccept.equals(key); } } ... cache.addListener(listener, new SpecificKeyFilter("Only Me")); ... This is useful when you want to limit which events you receive in a more efficient manner. There is also a CacheEventConverter that can be supplied to convert a value to another before raising the event, which is useful for modularizing any code that performs value conversions. Note The mentioned filters and converters are especially beneficial when used in conjunction with a Cluster Listener. This is because the filtering and conversion are done on the node where the event originated and not on the node where the event is listened to. This can provide the benefits of not having to replicate events across the cluster (filter) or of reduced payloads (converter). Initial State Events When a listener is installed, it is only notified of events that occur after it is fully installed. It may be desirable to get the current state of the cache contents upon first registration of the listener, by having an event of type @CacheEntryCreated generated for each element in the cache. Any additionally generated events during this initial phase will be queued until the appropriate events have been raised. Note This only works for clustered listeners at this time. ISPN-4608 covers adding this for non-clustered listeners. Duplicate Events It is possible in a non-transactional cache to receive duplicate events. This can happen when the primary owner of a key goes down while trying to perform a write operation such as a put. Data Grid internally rectifies the put operation by sending it to the new primary owner for the given key automatically; however, there are no guarantees as to whether the write was first replicated to backups. Thus more than one of the following write events ( CacheEntryCreatedEvent , CacheEntryModifiedEvent & CacheEntryRemovedEvent ) may be sent on a single operation. If more than one event is generated, Data Grid marks the event as having been generated by a retried command to help the user know when this occurs without having to pay attention to view changes. @Listener public class MyRetryListener { @CacheEntryModified public void entryModified(CacheEntryModifiedEvent event) { if (event.isCommandRetried()) { // Do something } } } Also, when using a CacheEventFilter or CacheEventConverter , the EventType contains a method isRetry that tells whether the event was generated due to a retry. 12.3. Cache Manager notifications Events that occur on a Cache Manager level are cluster-wide and involve events that affect all caches created by a single Cache Manager. Examples of Cache Manager events are nodes joining or leaving a cluster, or caches starting or stopping. See the org.infinispan.notifications.cachemanagerlistener.annotation package for a comprehensive list of all Cache Manager notifications, and their respective method-level annotations. 12.4. Synchronicity of events By default, all async notifications are dispatched in the notification thread pool. Sync notifications will delay the operation from continuing until the listener method completes or the CompletionStage completes (the former causing the thread to block). 
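For example, a minimal sketch (assuming the standard Data Grid listener API shown above and an application-managed executor, whose name here is illustrative) of a synchronous listener that offloads its work and returns a CompletionStage , so that the writing thread is not blocked while the work runs: import java.util.concurrent.CompletableFuture; import java.util.concurrent.CompletionStage; import java.util.concurrent.Executor; import org.infinispan.notifications.Listener; import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated; import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent; @Listener public class OffloadingListener { private final Executor workExecutor; /* any application-managed executor */ public OffloadingListener(Executor workExecutor) { this.workExecutor = workExecutor; } @CacheEntryCreated public CompletionStage<Void> onCreate(CacheEntryCreatedEvent event) { return CompletableFuture.runAsync(() -> { /* slow work such as auditing or indexing runs off the writer thread */ }, workExecutor); } } The write still waits for the returned stage to complete before proceeding, but the original thread is not blocked while the work runs on the supplied executor. 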
Alternatively, you could annotate your listener as asynchronous, in which case the operation continues immediately while the notification is completed asynchronously on the notification thread pool. To do this, annotate your listener as follows: Asynchronous Listener @Listener (sync = false) public class MyAsyncListener { @CacheEntryCreated void listen(CacheEntryCreatedEvent event) { } } Blocking Synchronous Listener @Listener public class MySyncListener { @CacheEntryCreated void listen(CacheEntryCreatedEvent event) { } } Non-Blocking Listener @Listener public class MyNonBlockingListener { @CacheEntryCreated CompletionStage<Void> listen(CacheEntryCreatedEvent event) { } } Asynchronous thread pool To tune the thread pool used to dispatch such asynchronous notifications, use the <listener-executor /> XML element in your configuration file. | [
"@Listener public class PrintWhenAdded { Queue<CacheEntryCreatedEvent> events = new ConcurrentLinkedQueue<>(); @CacheEntryCreated public CompletionStage<Void> print(CacheEntryCreatedEvent event) { events.add(event); return null; } }",
"@Listener (clustered = true) public class MyClusterListener { .... }",
"public class SpecificKeyFilter implements KeyFilter<String> { private final String keyToAccept; public SpecificKeyFilter(String keyToAccept) { if (keyToAccept == null) { throw new NullPointerException(); } this.keyToAccept = keyToAccept; } public boolean accept(String key) { return keyToAccept.equals(key); } } cache.addListener(listener, new SpecificKeyFilter(\"Only Me\"));",
"@Listener public class MyRetryListener { @CacheEntryModified public void entryModified(CacheEntryModifiedEvent event) { if (event.isCommandRetried()) { // Do something } } }",
"@Listener (sync = false) public class MyAsyncListener { @CacheEntryCreated void listen(CacheEntryCreatedEvent event) { } }",
"@Listener public class MySyncListener { @CacheEntryCreated void listen(CacheEntryCreatedEvent event) { } }",
"@Listener public class MyNonBlockingListener { @CacheEntryCreated CompletionStage<Void> listen(CacheEntryCreatedEvent event) { } }"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/configuring_data_grid_caches/listeners-notifications |
Chapter 4. Exporting inventory data | Chapter 4. Exporting inventory data You can use the export service for inventory to export a list of systems and their data from your Insights inventory. You can specify CSV or JSON as the output format. The export process takes place asynchronously, so it runs in the background. The service is available in both the Insights UI and through the export service API. The exported content includes the following information about each system in your inventory: host_id fqdn (Fully Qualified Domain Name) display_name group_id group_name state os_release updated subscription_manager_id satellite_id tags host_type Note The export service currently exports information about all systems in your inventory. Support for filters will be available in a future release. The Inventory export service works differently from the export function in other services, such as Advisor. Some of the differences are: Inventory export operates asynchronously Exports the entire inventory to one continuous file (no pagination in the export file) Retains generated files for 7 days Uses token-based service accounts for authorization if using the export service API Important Your RBAC permissions affect the system information you can export. You must have inventory:hosts:read permission for a system to export system information. 4.1. Inventory data files The inventory export process creates and downloads a zip file. The zip file contains the following files: id .suffix - the export data file, with the file name format of id .json for JSON files, or id .csv for CSV files. For example: f26a57ac-1efc-4831-9c26-c818b6060ddf.json README.md - the export manifest for the JSON/CSV file, which lists the downloaded files, any errors, and instructions for obtaining help meta.json - describes the export operation - requestor, date, Organization ID, and file metadata (such as the filename of the JSON/CSV file) 4.2. Exporting system inventory from the Insights UI You can export inventory data from the Insights UI. The inventory data export service works differently from the export service for other Insights services, such as Advisor. Prerequisites RBAC permissions for the systems you want to view and export Inventory:hosts:read (inventory:hosts:read * for all systems in inventory) A User Access role for workspaces. For more information about User Access roles, see User access to workspaces . Procedure Navigate to Inventory > Systems. The list of systems displays. Click the Export icon next to the options icon (...). The drop-down menu displays. Select CSV or JSON as the export format. A status message displays: Preparing export. Once complete, your download will start automatically. When the download completes, a browser window automatically opens to display the results. If you remain on the Systems page after requesting the download, status messages from Insights appear with updates on the progress of the export operation. 4.3. Exporting system inventory using the export API You can use the Export API to export your inventory data. Use the REST API entry point: console.redhat.com/api/export/v1 . The Export Service API supports the GET, POST, and DELETE HTTP methods. The API offers the following services: POST /exports GET /exports GET /exports/ id DELETE /exports/ id GET /exports/ id /status The API works asynchronously. You can submit the POST /exports request for export from the Export API and receive a reply with an ID for that export. 
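For example, a minimal sketch in Java (using java.net.http ; the bearer-token header for the service account and the exact shape of the JSON reply beyond its id field are assumptions) of submitting an export request with the sample body from this chapter and printing the reply that contains the ID: import java.net.URI; import java.net.http.HttpClient; import java.net.http.HttpRequest; import java.net.http.HttpResponse; public class RequestInventoryExport { public static void main(String[] args) throws Exception { String token = System.getenv("EXPORT_SERVICE_TOKEN"); /* illustrative variable holding a service account token */ String body = "{ \"name\": \"Inventory Export\", \"format\": \"json\", \"sources\": [ { \"application\": \"urn:redhat:application:inventory\", \"resource\": \"urn:redhat:application:inventory:export:systems\" } ] }"; HttpRequest request = HttpRequest.newBuilder(URI.create("https://console.redhat.com/api/export/v1/exports")) .header("Authorization", "Bearer " + token) .header("Content-Type", "application/json") .POST(HttpRequest.BodyPublishers.ofString(body)) .build(); HttpResponse<String> response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString()); System.out.println(response.body()); /* the JSON reply contains the id to use with GET /exports/{id}/status and GET /exports/{id} */ } } 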
You can then use that ID to monitor the progress of the export operation with the GET /exports/ id /status request. When the generated export is complete, you can download it (GET /exports/ id ) or delete it (DELETE /exports/ id ). Successful requests return the following responses: 200 - Success 202 - Successfully deleted (for the DELETE method) For more information about the operations, schemas, and objects, see Consoledot Export Service . 4.3.1. Requesting the system inventory export Before you can request the exported data file, you need to obtain a unique ID for the download. To obtain the ID, issue a POST request. The server returns a response that includes the ID. Use the ID in any request that requires the id parameter, such as GET /exports/ id . Prerequisites Token-based service account with the appropriate permissions for your systems RBAC permissions for the systems you want to view and export Inventory:hosts:read (inventory:hosts:read * for all systems in inventory) A User Access role for workspaces. For more information about User Access roles, see User access to workspaces . Procedure Create a request for the export service, or use this sample request code: { "name": "Inventory Export", "format": "json", "sources": [ { "application": "urn:redhat:application:inventory", "resource": "urn:redhat:application:inventory:export:systems" } ] } Note You can request CSV or JSON as your export format. In the Hybrid Cloud Console, navigate to the API documentation: https://console.redhat.com/docs/api/export . Note You can use the API documentation to experiment and run queries against the API before writing your own custom client and/or use the APIs in your automation. Select POST /export. Remove the existing sample code in the Request Body window and paste the request code into the window. Click Execute . This request initiates the export process. The curl request and server response appear, along with the result codes for the POST operation. Look for the id field in the server response. Copy and save the string value for id . Use this value for id in your requests. Optional. Issue the GET /exports request. The server returns the curl request, request URL, and response codes. Optional. To request the status of the export request, issue the GET /exports/ id /status request. When the export has completed, issue the GET /exports/ id request, with the ID string that you copied in place of id . The server returns a link to download the export file (the payload). Click Download File . When the download completes, a notification message appears in your browser. Click the browser notification to locate the downloaded zip file. Note The server retains export files for 7 days. 4.3.2. Deleting export files To delete exported files, issue the DELETE /exports/ id request. Additional resources Knowledge Base article about inventory export: Ability to export a list of registered inventory systems Export service API for multiple sources: https://developers.redhat.com/api-catalog/api/export-service Export service API doc within the console: https://console.redhat.com/docs/api/export For the latest OpenAPI specifications, see https://swagger.io/specification/ 4.3.3. Automating inventory export using Ansible playbooks You can use an Ansible playbook to automate the inventory export process. The playbook is a generic playbook for the export service that uses token-based service accounts for authentication. Procedure Navigate to https://github.com/jeromemarc/insights-inventory-export . 
Download the inventory-export.yml playbook. Run the playbook. The playbook does everything from requesting the export id , to requesting download status, to requesting the downloaded payload. Additional resources For more information about service accounts, refer to the KB article: Transition of Red Hat Hybrid Cloud Console APIs from basic authentication to token-based authentication via service accounts . 4.3.4. Using the inventory export service for multiple Insights services You can use the inventory export service for multiple services, such as inventory and notifications. To request multiple services, include source information for each service that you want to request in your POST /exports request. For example: { "name": "Inventory Export multiple sources", "format": "json", "sources": [ { "application": "urn:redhat:application:inventory", "resource": "urn:redhat:application:inventory:export:systems", "filters": {} }, { "application": "urn:redhat:application:notifications", "resource": "urn:redhat:application:notifications:export:events", "filters": {} } ] } The POST /exports request returns a unique id for each export. The GET /exports request returns a zip file that includes multiple JSON or CSV files, one for each service that you request. | [
"{ \"name\": \"Inventory Export\", \"format\": \"json\", \"sources\": [ { \"application\": \"urn:redhat:application:inventory\", \"resource\": \"urn:redhat:application:inventory:export:systems\" } ] }",
"{ \"name\": \"Inventory Export multiple sources\", \"format\": \"json\", \"sources\": [ { \"application\": \"urn:redhat:application:inventory\", \"resource\": \"urn:redhat:application:inventory:export:systems\", \"filters\": {} }, { \"application\": \"urn:redhat:application:notifications\", \"resource\": \"urn:redhat:application:notifications:export:events\", \"filters\": {} } ] }"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/viewing_and_managing_system_inventory/assembly-exporting-inventory-data_user-access |
Images | Images OpenShift Container Platform 4.17 Creating and managing images and imagestreams in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/images/index |
Chapter 10. SSO protocols | Chapter 10. SSO protocols This section discusses authentication protocols, the Red Hat build of Keycloak authentication server and how applications, secured by the Red Hat build of Keycloak authentication server, interact with these protocols. 10.1. OpenID Connect OpenID Connect (OIDC) is an authentication protocol that is an extension of OAuth 2.0 . OAuth 2.0 is a framework for building authorization protocols and is incomplete. OIDC, however, is a full authentication and authorization protocol that uses the Json Web Token (JWT) standards. The JWT standards define an identity token JSON format and methods to digitally sign and encrypt data in a compact and web-friendly way. In general, OIDC implements two use cases. The first case is an application requesting that a Red Hat build of Keycloak server authenticates a user. Upon successful login, the application receives an identity token and an access token . The identity token contains user information including user name, email, and profile information. The realm digitally signs the access token which contains access information (such as user role mappings) that applications use to determine the resources users can access in the application. The second use case is a client accessing remote services. The client requests an access token from Red Hat build of Keycloak to invoke on remote services on behalf of the user. Red Hat build of Keycloak authenticates the user and asks the user for consent to grant access to the requesting client. The client receives the access token which is digitally signed by the realm. The client makes REST requests on remote services using the access token . The remote REST service extracts the access token . The remote REST service verifies the tokens signature. The remote REST service decides, based on access information within the token, to process or reject the request. 10.1.1. OIDC auth flows OIDC has several methods, or flows, that clients or applications can use to authenticate users and receive identity and access tokens. The method depends on the type of application or client requesting access. 10.1.1.1. Authorization Code Flow The Authorization Code Flow is a browser-based protocol and suits authenticating and authorizing browser-based applications. It uses browser redirects to obtain identity and access tokens. A user connects to an application using a browser. The application detects the user is not logged into the application. The application redirects the browser to Red Hat build of Keycloak for authentication. The application passes a callback URL as a query parameter in the browser redirect. Red Hat build of Keycloak uses the parameter upon successful authentication. Red Hat build of Keycloak authenticates the user and creates a one-time, short-lived, temporary code. Red Hat build of Keycloak redirects to the application using the callback URL and adds the temporary code as a query parameter in the callback URL. The application extracts the temporary code and makes a background REST invocation to Red Hat build of Keycloak to exchange the code for an identity and access and refresh token. To prevent replay attacks, the temporary code cannot be used more than once. Note A system is vulnerable to a stolen token for the lifetime of that token. For security and scalability reasons, access tokens are generally set to expire quickly so subsequent token requests fail. 
If a token expires, an application can obtain a new access token using the additional refresh token sent by the login protocol. Confidential clients provide client secrets when they exchange the temporary codes for tokens. Public clients are not required to provide client secrets. Public clients are secure when HTTPS is strictly enforced and redirect URIs registered for the client are strictly controlled. HTML5/JavaScript clients have to be public clients because there is no way to securely transmit the client secret to HTML5/JavaScript clients. For more details, see the Managing Clients chapter. Red Hat build of Keycloak also supports the Proof Key for Code Exchange specification. 10.1.1.2. Implicit Flow The Implicit Flow is a browser-based protocol. It is similar to the Authorization Code Flow but with fewer requests and no refresh tokens. Note The possibility exists of access tokens leaking in the browser history when tokens are transmitted via redirect URIs (see below). Also, this flow does not provide clients with refresh tokens. Therefore, access tokens have to be long-lived or users have to re-authenticate when they expire. We do not advise using this flow. This flow is supported because it is in the OIDC and OAuth 2.0 specification. The protocol works as follows: A user connects to an application using a browser. The application detects the user is not logged into the application. The application redirects the browser to Red Hat build of Keycloak for authentication. The application passes a callback URL as a query parameter in the browser redirect. Red Hat build of Keycloak uses the query parameter upon successful authentication. Red Hat build of Keycloak authenticates the user and creates an identity and access token. Red Hat build of Keycloak redirects to the application using the callback URL and additionally adds the identity and access tokens as a query parameter in the callback URL. The application extracts the identity and access tokens from the callback URL. 10.1.1.3. Resource owner password credentials grant (Direct Access Grants) Direct Access Grants are used by REST clients to obtain tokens on behalf of users. It is a HTTP POST request that contains: The credentials of the user. The credentials are sent within form parameters. The id of the client. The clients secret (if it is a confidential client). The HTTP response contains the identity , access , and refresh tokens. 10.1.1.4. Client credentials grant The Client Credentials Grant creates a token based on the metadata and permissions of a service account associated with the client instead of obtaining a token that works on behalf of an external user. Client Credentials Grants are used by REST clients. See the Service Accounts chapter for more information. 10.1.1.5. Refresh token grant By default, Red Hat build of Keycloak returns refresh tokens in the token responses from most of the flows. Some exceptions are implicit flow or client credentials grant described above. Refresh token is tied to the user session of the SSO browser session and can be valid for the lifetime of the user session. However, that client should send a refresh-token request at least once per specified interval. Otherwise, the session can be considered "idle" and can expire. See the timeouts section for more information. Red Hat build of Keycloak supports offline tokens , which can be used typically when client needs to use refresh token even if corresponding browser SSO session is already expired. 10.1.1.5.1. 
Refresh token rotation It is possible to specify that the refresh token is considered invalid once it is used. This means that the client must always save the refresh token from the last refresh response because older refresh tokens, which were already used, would not be considered valid anymore by Red Hat build of Keycloak. This can be set with the use of the Revoke Refresh token option as specified in the timeouts section . Red Hat build of Keycloak also supports the situation where no refresh token rotation exists. In this case, a refresh token is returned during login, but subsequent responses from refresh-token requests will not return new refresh tokens. This practice is recommended for instance in the FAPI 2 draft specification in the securing apps section. In Red Hat build of Keycloak, it is possible to skip refresh token rotation with the use of client policies . You can add the executor suppress-refresh-token-rotation to a client profile and configure a client policy to specify for which clients the profile is triggered, which means that for those clients refresh token rotation is skipped. 10.1.1.6. Device authorization grant This is used by clients running on internet-connected devices that have limited input capabilities or lack a suitable browser. Here's a brief summary of the protocol: The application requests a device code and a user code from Red Hat build of Keycloak. Red Hat build of Keycloak creates a device code and a user code. Red Hat build of Keycloak returns a response including the device code and the user code to the application. The application provides the user with the user code and the verification URI. The user accesses a verification URI to be authenticated by using another browser. You could define a short verification_uri that is redirected to the Red Hat build of Keycloak verification URI (/realms/realm_name/device) outside Red Hat build of Keycloak, for example in a proxy. The application repeatedly polls Red Hat build of Keycloak to find out if the user completed the user authorization. If user authentication is complete, the application exchanges the device code for an identity , access and refresh token. 10.1.1.7. Client initiated backchannel authentication grant This feature is used by clients who want to initiate the authentication flow by communicating with the OpenID Provider directly, without redirecting through the user's browser as in OAuth 2.0's authorization code grant. Here's a brief summary of the protocol: The client requests an auth_req_id from Red Hat build of Keycloak that identifies the authentication request made by the client. Red Hat build of Keycloak creates the auth_req_id. After receiving this auth_req_id, the client repeatedly polls Red Hat build of Keycloak to obtain an Access Token, Refresh Token and ID Token in return for the auth_req_id until the user is authenticated. An administrator can configure Client Initiated Backchannel Authentication (CIBA) related operations as a CIBA Policy per realm. Also refer to other parts of the Red Hat build of Keycloak documentation, such as Backchannel Authentication Endpoint and Client Initiated Backchannel Authentication Grant in the securing apps section. 10.1.1.7.1. CIBA Policy An administrator carries out the following operations on the Admin Console : Open the Authentication CIBA Policy tab. Configure the items and click Save . The configurable items and their descriptions follow. 
Configuration Description Backchannel Token Delivery Mode Specifying how the CD (Consumption Device) gets the authentication result and related tokens. There are three modes, "poll", "ping" and "push". Red Hat build of Keycloak only supports "poll". The default setting is "poll". This configuration is required. For more details, see CIBA Specification . Expires In The expiration time of the "auth_req_id" in seconds since the authentication request was received. The default setting is 120. This configuration is required. For more details, see CIBA Specification . Interval The interval in seconds the CD (Consumption Device) needs to wait for between polling requests to the token endpoint. The default setting is 5. This configuration is optional. For more details, see CIBA Specification . Authentication Requested User Hint The way of identifying the end-user for whom authentication is being requested. The default setting is "login_hint". There are three modes, "login_hint", "login_hint_token" and "id_token_hint". Red Hat build of Keycloak only supports "login_hint". This configuration is required. For more details, see CIBA Specification . 10.1.1.7.2. Provider Setting The CIBA grant uses the following two providers. Authentication Channel Provider: provides the communication between Red Hat build of Keycloak and the entity that actually authenticates the user via AD (Authentication Device). User Resolver Provider: get UserModel of Red Hat build of Keycloak from the information provided by the client to identify the user. Red Hat build of Keycloak has both default providers. However, the administrator needs to set up Authentication Channel Provider like this: kc.[sh|bat] start --spi-ciba-auth-channel-ciba-http-auth-channel-http-authentication-channel-uri=https://backend.internal.example.com The configurable items and their description follow. Configuration Description http-authentication-channel-uri Specifying URI of the entity that actually authenticates the user via AD (Authentication Device). 10.1.1.7.3. Authentication Channel Provider CIBA standard document does not specify how to authenticate the user by AD. Therefore, it might be implemented at the discretion of products. Red Hat build of Keycloak delegates this authentication to an external authentication entity. To communicate with the authentication entity, Red Hat build of Keycloak provides Authentication Channel Provider. Its implementation of Red Hat build of Keycloak assumes that the authentication entity is under the control of the administrator of Red Hat build of Keycloak so that Red Hat build of Keycloak trusts the authentication entity. It is not recommended to use the authentication entity that the administrator of Red Hat build of Keycloak cannot control. Authentication Channel Provider is provided as SPI provider so that users of Red Hat build of Keycloak can implement their own provider in order to meet their environment. Red Hat build of Keycloak provides its default provider called HTTP Authentication Channel Provider that uses HTTP to communicate with the authentication entity. If a user of Red Hat build of Keycloak user want to use the HTTP Authentication Channel Provider, they need to know its contract between Red Hat build of Keycloak and the authentication entity consisting of the following two parts. Authentication Delegation Request/Response Red Hat build of Keycloak sends an authentication request to the authentication entity. 
Authentication Result Notification/ACK The authentication entity notifies the result of the authentication to Red Hat build of Keycloak. Authentication Delegation Request/Response consists of the following messaging. Authentication Delegation Request The request is sent from Red Hat build of Keycloak to the authentication entity to ask it for user authentication by AD. Headers Name Value Description Content-Type application/json The message body is json formatted. Authorization Bearer [token] The [token] is used when the authentication entity notifies the result of the authentication to Red Hat build of Keycloak. Parameters Type Name Description Path delegation_reception The endpoint provided by the authentication entity to receive the delegation request Body Name Description login_hint It tells the authentication entity who is authenticated by AD. By default, it is the user's "username". This field is required and was defined by CIBA standard document. scope It tells which scopes the authentication entity gets consent from the authenticated user. This field is required and was defined by CIBA standard document. is_consent_required It shows whether the authentication entity needs to get consent from the authenticated user about the scope. This field is required. binding_message Its value is intended to be shown in both CD and AD's UI to make the user recognize that the authentication by AD is triggered by CD. This field is optional and was defined by CIBA standard document. acr_values It tells the requesting Authentication Context Class Reference from CD. This field is optional and was defined by CIBA standard document. Authentication Delegation Response The response is returned from the authentication entity to Red Hat build of Keycloak to notify that the authentication entity received the authentication request from Red Hat build of Keycloak. Responses HTTP Status Code Description 201 It notifies Red Hat build of Keycloak of receiving the authentication delegation request. Authentication Result Notification/ACK consists of the following messaging. Authentication Result Notification The authentication entity sends the result of the authentication request to Red Hat build of Keycloak. Headers Name Value Description Content-Type application/json The message body is json formatted. Authorization Bearer [token] The [token] must be the one the authentication entity has received from Red Hat build of Keycloak in Authentication Delegation Request. Parameters Type Name Description Path realm The realm name Body Name Description status It tells the result of user authentication by AD. It must be one of the following status. SUCCEED : The authentication by AD has been successfully completed. UNAUTHORIZED : The authentication by AD has not been completed. CANCELLED : The authentication by AD has been cancelled by the user. Authentication Result ACK The response is returned from Red Hat build of Keycloak to the authentication entity to notify Red Hat build of Keycloak received the result of user authentication by AD from the authentication entity. Responses HTTP Status Code Description 200 It notifies the authentication entity of receiving the notification of the authentication result. 10.1.1.7.4. User Resolver Provider Even if the same user, its representation may differ in each CD, Red Hat build of Keycloak and the authentication entity. 
For CD, Red Hat build of Keycloak and the authentication entity to recognize the same user, this User Resolver Provider converts their own user representations among them. User Resolver Provider is provided as SPI provider so that users of Red Hat build of Keycloak can implement their own provider in order to meet their environment. Red Hat build of Keycloak provides its default provider called Default User Resolver Provider that has the following characteristics. Only support login_hint parameter and is used as default. username of UserModel in Red Hat build of Keycloak is used to represent the user on CD, Red Hat build of Keycloak and the authentication entity. 10.1.2. OIDC Logout OIDC has four specifications relevant to logout mechanisms: Session Management RP-Initiated Logout Front-Channel Logout Back-Channel Logout Again since all of this is described in the OIDC specification we will only give a brief overview here. 10.1.2.1. Session Management This is a browser-based logout. The application obtains session status information from Red Hat build of Keycloak at a regular basis. When the session is terminated at Red Hat build of Keycloak the application will notice and trigger its own logout. 10.1.2.2. RP-Initiated Logout This is also a browser-based logout where the logout starts by redirecting the user to a specific endpoint at Red Hat build of Keycloak. This redirect usually happens when the user clicks the Log Out link on the page of some application, which previously used Red Hat build of Keycloak to authenticate the user. Once the user is redirected to the logout endpoint, Red Hat build of Keycloak is going to send logout requests to clients to let them invalidate their local user sessions, and potentially redirect the user to some URL once the logout process is finished. The user might be optionally requested to confirm the logout in case the id_token_hint parameter was not used. After logout, the user is automatically redirected to the specified post_logout_redirect_uri as long as it is provided as a parameter. Note that you need to include either the client_id or id_token_hint parameter in case the post_logout_redirect_uri is included. Also the post_logout_redirect_uri parameter needs to match one of the Valid Post Logout Redirect URIs specified in the client configuration. Depending on the client configuration, logout requests can be sent to clients through the front-channel or through the back-channel. For the frontend browser clients, which rely on the Session Management described in the section, Red Hat build of Keycloak does not need to send any logout requests to them; these clients automatically detect that SSO session in the browser is logged out. 10.1.2.3. Front-channel Logout To configure clients to receive logout requests through the front-channel, look at the Front-Channel Logout client setting. When using this method, consider the following: Logout requests sent by Red Hat build of Keycloak to clients rely on the browser and on embedded iframes that are rendered for the logout page. By being based on iframes , front-channel logout might be impacted by Content Security Policies (CSP) and logout requests might be blocked. If the user closes the browser prior to rendering the logout page or before logout requests are actually sent to clients, their sessions at the client might not be invalidated. Note Consider using Back-Channel Logout as it provides a more reliable and secure approach to log out users and terminate their sessions on the clients. 
If the client is not enabled with front-channel logout, then Red Hat build of Keycloak is going to try first to send logout requests through the back-channel using the Back-Channel Logout URL . If not defined, the server is going to fall back to using the Admin URL . 10.1.2.4. Backchannel Logout This is a non-browser-based logout that uses direct backchannel communication between Red Hat build of Keycloak and clients. Red Hat build of Keycloak sends a HTTP POST request containing a logout token to all clients logged into Red Hat build of Keycloak. These requests are sent to a registered backchannel logout URLs at Red Hat build of Keycloak and are supposed to trigger a logout at client side. 10.1.3. Red Hat build of Keycloak server OIDC URI endpoints The following is a list of OIDC endpoints that Red Hat build of Keycloak publishes. These endpoints can be used when a non-Red Hat build of Keycloak client adapter uses OIDC to communicate with the authentication server. They are all relative URLs. The root of the URL consists of the HTTP(S) protocol, hostname, and optionally the path: For example /realms/{realm-name}/protocol/openid-connect/auth Used for obtaining a temporary code in the Authorization Code Flow or obtaining tokens using the Implicit Flow, Direct Grants, or Client Grants. /realms/{realm-name}/protocol/openid-connect/token Used by the Authorization Code Flow to convert a temporary code into a token. /realms/{realm-name}/protocol/openid-connect/logout Used for performing logouts. /realms/{realm-name}/protocol/openid-connect/userinfo Used for the User Info service described in the OIDC specification. /realms/{realm-name}/protocol/openid-connect/revoke Used for OAuth 2.0 Token Revocation described in RFC7009 . /realms/{realm-name}/protocol/openid-connect/certs Used for the JSON Web Key Set (JWKS) containing the public keys used to verify any JSON Web Token (jwks_uri) /realms/{realm-name}/protocol/openid-connect/auth/device Used for Device Authorization Grant to obtain a device code and a user code. /realms/{realm-name}/protocol/openid-connect/ext/ciba/auth This is the URL endpoint for Client Initiated Backchannel Authentication Grant to obtain an auth_req_id that identifies the authentication request made by the client. /realms/{realm-name}/protocol/openid-connect/logout/backchannel-logout This is the URL endpoint for performing backchannel logouts described in the OIDC specification. In all of these, replace {realm-name} with the name of the realm. 10.2. SAML SAML 2.0 is a similar specification to OIDC but more mature. It is descended from SOAP and web service messaging specifications so is generally more verbose than OIDC. SAML 2.0 is an authentication protocol that exchanges XML documents between authentication servers and applications. XML signatures and encryption are used to verify requests and responses. In general, SAML implements two use cases. The first use case is an application that requests the Red Hat build of Keycloak server authenticates a user. Upon successful login, the application will receive an XML document. This document contains an SAML assertion that specifies user attributes. The realm digitally signs the document which contains access information (such as user role mappings) that applications use to determine the resources users are allowed to access in the application. The second use case is a client accessing remote services. The client requests a SAML assertion from Red Hat build of Keycloak to invoke on remote services on behalf of the user. 10.2.1. 
SAML bindings Red Hat build of Keycloak supports three binding types. 10.2.1.1. Redirect binding Redirect binding uses a series of browser redirect URIs to exchange information. A user connects to an application using a browser. The application detects the user is not authenticated. The application generates an XML authentication request document and encodes it as a query parameter in a URI. The URI is used to redirect to the Red Hat build of Keycloak server. Depending on your settings, the application can also digitally sign the XML document and include the signature as a query parameter in the redirect URI to Red Hat build of Keycloak. This signature is used to validate the client that sends the request. The browser redirects to Red Hat build of Keycloak. The server extracts the XML auth request document and verifies the digital signature, if required. The user enters their authentication credentials. After authentication, the server generates an XML authentication response document. The document contains a SAML assertion that holds metadata about the user, including name, address, email, and any role mappings the user has. The document is usually digitally signed using XML signatures, and may also be encrypted. The XML authentication response document is encoded as a query parameter in a redirect URI. The URI brings the browser back to the application. The digital signature is also included as a query parameter. The application receives the redirect URI and extracts the XML document. The application verifies the realm's signature to ensure it is receiving a valid authentication response. The information inside the SAML assertion is used to make access decisions or display user data. 10.2.1.2. POST binding POST binding is similar to Redirect binding but POST binding exchanges XML documents using POST requests instead of using GET requests. POST binding uses JavaScript to make the browser send a POST request to the Red Hat build of Keycloak server or application when exchanging documents. HTTP responds with an HTML document which contains an HTML form containing embedded JavaScript. When the page loads, the JavaScript automatically invokes the form. POST binding is recommended due to two restrictions: Security - With Redirect binding, the SAML response is part of the URL. It is less secure as it is possible to capture the response in logs. Size - Sending the document in the HTTP payload provides more scope for large amounts of data than in a limited URL. 10.2.1.3. ECP Enhanced Client or Proxy (ECP) is a SAML v.2.0 profile which allows the exchange of SAML attributes outside the context of a web browser. It is often used by REST or SOAP-based clients. 10.2.2. Red Hat build of Keycloak Server SAML URI Endpoints Red Hat build of Keycloak has one endpoint for all SAML requests. http(s)://authserver.host/realms/{realm-name}/protocol/saml All bindings use this endpoint. 10.3. OpenID Connect compared to SAML The following lists a number of factors to consider when choosing a protocol. For most purposes, Red Hat build of Keycloak recommends using OIDC. OIDC OIDC is specifically designed to work with the web. OIDC is suited for HTML5/JavaScript applications because it is easier to implement on the client side than SAML. OIDC tokens are in the JSON format which makes them easier for Javascript to consume. OIDC has features to make security implementation easier. For example, see the iframe trick that the specification uses to determine a users login status. 
SAML SAML is designed as a layer to work on top of the web. SAML can be more verbose than OIDC. Users pick SAML over OIDC because there is a perception that it is mature. Users pick SAML over OIDC for existing applications that are already secured with it. 10.4. Docker registry v2 authentication Note Docker authentication is disabled by default. To enable docker authentication, see the Enabling and disabling features chapter. Docker Registry V2 Authentication is a protocol, similar to OIDC, that authenticates users against Docker registries. Red Hat build of Keycloak's implementation of this protocol lets Docker clients use a Red Hat build of Keycloak authentication server to authenticate against a registry. This protocol uses standard token and signature mechanisms but it does deviate from a true OIDC implementation. It deviates by using a very specific JSON format for requests and responses as well as mapping repository names and permissions to the OAuth scope mechanism. 10.4.1. Docker authentication flow The authentication flow is described in the Docker API documentation . The following is a summary from the perspective of the Red Hat build of Keycloak authentication server: Perform a docker login . The Docker client requests a resource from the Docker registry. If the resource is protected and no authentication token is in the request, the Docker registry server responds with a 401 HTTP message with some information on the permissions that are required and the location of the authorization server. The Docker client constructs an authentication request based on the 401 HTTP message from the Docker registry. The client uses the locally cached credentials (from the docker login command) as part of the HTTP Basic Authentication request to the Red Hat build of Keycloak authentication server. The Red Hat build of Keycloak authentication server attempts to authenticate the user and returns a JSON body containing an OAuth-style Bearer token. The Docker client receives a bearer token from the JSON response and uses it in the authorization header to request the protected resource. The Docker registry receives the new request for the protected resource with the token from the Red Hat build of Keycloak server. The registry validates the token and grants access to the requested resource (if appropriate). Note Red Hat build of Keycloak does not create a browser SSO session after successful authentication with the Docker protocol. The browser SSO session does not use the Docker protocol as it cannot refresh tokens or obtain the status of a token or session from the Red Hat build of Keycloak server; therefore a browser SSO session is not necessary. For more details, see the transient session section. 10.4.2. Red Hat build of Keycloak Docker Registry v2 Authentication Server URI Endpoints Red Hat build of Keycloak has one endpoint for all Docker auth v2 requests. http(s)://authserver.host/realms/{realm-name}/protocol/docker-v2/auth | [
"kc.[sh|bat] start --spi-ciba-auth-channel-ciba-http-auth-channel-http-authentication-channel-uri=https://backend.internal.example.com",
"POST [delegation_reception]",
"POST /realms/[realm]/protocol/openid-connect/ext/ciba/auth/callback",
"https://localhost:8080"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/sso_protocols |
Deploying OpenShift Data Foundation using Microsoft Azure | Deploying OpenShift Data Foundation using Microsoft Azure Red Hat OpenShift Data Foundation 4.16 Instructions on deploying OpenShift Data Foundation using Microsoft Azure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Microsoft Azure. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_microsoft_azure/index |
Chapter 1. About Red Hat Trusted Application Pipeline | Chapter 1. About Red Hat Trusted Application Pipeline Sophisticated applications have complex software supply chains, and the longer a software supply chain is, the more vulnerable it is to attacks of all kinds. Secure every phase of your software development lifecycle with Red Hat Trusted Application Pipeline (RHTAP). RHTAP can build, test, deploy, and monitor your source code with secure CI/CD, and its comprehensive set of security tools protects your complete software supply chain. Key RHTAP features Continuously build, test, and deploy container images from your Git source code to a built-in development environment. Ready-to-use templates to start learning and customizing right away. Build Java, Python, Node, Go, or npm-based apps into container images. Access to Red Hat Developer Hub as your self-serve developer portal. Generate, check, and manage your software bill of materials (SBOM). Cryptographically sign and attest container image provenance with Tekton chains. Verify container image SLSA compliance up to level 3, against more than 40 rules. Vulnerabilities scanning with each merge request to identify and address any security threats at the earliest stage possible. Who's the target user? If you're a platform engineer, application developer, or security team member, you're in the right place. In Red Hat Trusted Application Pipeline, you'll find everything you need to install, configure, and customize the internal developer portal to secure your software supply chain across the development lifecycle. How does it work? Red Hat Trusted Application Pipeline (RHTAP) empowers you to streamline and secure your entire DevSecOps CI/CD process. Secure development from the onset Once RHTAP is installed and configured, access pre-built, secure templates within Red Hat Developer Hub. Simply select the appropriate ready-to-use software template, fill in the necessary details, and create a new application. This creates a dedicated development environment that includes everything you need: a code repository (source code and GitOps repositories), technical documentation, and a continuous integration/continuous delivery (CI/CD) pipeline. Security scans throughout the development lifecycle Editing the source code triggers a pipeline run within your application. This pipeline ensures every build artifact is signed and attested for authenticity. It also scans for vulnerabilities in your code and automatically generates Software Bills of Materials (SBOMs). These SBOMs detail all components, libraries, and dependencies included in the container image, providing complete transparency into your application's makeup. Review, Refine, and Release The pipeline presents any identified vulnerabilities for your review and remediation. You can also review the SBOM to gain a deeper understanding of your application's components. Depending on your promotional workflow, you might advance your application through development, staging, and finally to production. Each promotion triggers another pipeline run, scanning for vulnerabilities and enforcing your Enterprise Contract (EC). The EC ensures that container images meet predefined quality and security standards before release. Should an image fail to meet these criteria, the EC issues a detailed report identifying the necessary corrections. This streamlined approach with RHTAP allows developers to focus on innovation while upholding the highest security standards throughout the development lifecycle. 
To better understand how RHTAP works, take a look at the following descriptive list of the various components and technologies that support and are supported by RHTAP. Table 1.1. RHTAP technologies and components Components and technologies Description Red Hat Developer Hub RHDH gives you access to countless resources and tools for secure software development, so getting started with RHTAP is streamlined and straightforward. RHDH encourages best practices and facilitates the integration of security measures from the very start of your development process. Red Hat Trusted Artifact Signer RHTAS enhances software integrity by making sure every piece of your code and all of your artifacts are signed and attested. RHTAS provides a verifiable trust chain to confirm that all of your software components are safeguarded and authentic. Red Hat Trusted Profile Analyzer RHTPA automates the creation of your software bill of materials (SBOM). SBOMs are critical for maintaining software supply chain transparency and compliance because they provide a detailed list of all components, libraries, and dependencies included in a software product. When you use RHTPA to generate and manage your SBOM, you're making sure that all of your stakeholders have accurate and current information about the composition of your software. OpenShift RHTAP uses an OpenShift Container Platform (OCP) cluster for compute resources. OCP also includes a console, which offers various services to standardize workflows and make it easier to securely manage the entire development lifecycle. GitHub RHTAP automatically starts a build according to the pipeline definition in your pull request (PR). You can also view PR test feedback according to the checks API, and after successful tests, you can set up your PRs to automerge. Argo CD Argo CD from GitOps declares and controls versions of your app definitions, configurations, and environments, and automates and tracks app deployment and lifecycle management. Tekton build pipeline When you build with RHTAP, you store a complete Tekton build pipeline in your repository. Tekton Chains RHTAP can use Tekton Chains to produce a signed build pipeline attestation. Additional resources For more information about getting started with RHTAP, see Getting Started with Red Hat Trusted Application Pipeline . For more information about Red Hat Developer Hub, see Product Documentation for Red Hat Developer Hub 1.1 . For more information about Red Hat Trusted Artifact Signer, see Red Hat Trusted Artifact Signer Deployment guide . For more information about Red Hat Trusted Profile Analyzer, see Product Documentation for Red Hat Trusted Profile Analyzer . For more information about OpenShift, see OpenShift . For more information about Argo CD, see Argo CD . For more information about Tekton build pipelines, see Tekton build pipeline . For more information about Tekton Chains, see Tekton Chains . | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/release_notes_for_red_hat_trusted_application_pipeline_1.0/con_about-rhtap_default |
Chapter 5. Enhancements | Chapter 5. Enhancements Streams for Apache Kafka 2.9 adds a number of enhancements. 5.1. Streams for Apache Kafka 5.1.1. Kafka 3.9.0 enhancements For an overview of the enhancements introduced with Kafka 3.9.0, refer to the Kafka 3.9.0 Release Notes. 5.1.2. Configuration mechanism for quotas management The Strimzi Quotas plugin moves to GA (General Availability). Use the plugin properties to set throughput and storage limits on brokers in your Kafka cluster configuration. Warning If you have previously used the Strimzi Quotas plugin in releases prior to Streams for Apache Kafka 2.8, update your Kafka cluster configuration to use the latest properties to avoid reconciliation issues when upgrading. For more information, see Setting limits on brokers using the Kafka Static Quota plugin . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_rhel/enhancements-str |
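As a rough sketch of what the plugin configuration looks like at the broker level, the properties below enable the Strimzi Quotas plugin and set example throughput limits. The file path and limit values are illustrative assumptions, and the storage-related property names changed in recent plugin versions, so check the linked documentation for the exact names before applying anything.
# Sketch only: enable the Strimzi Quotas plugin in the broker configuration
# (the path and the limit values below are assumptions, not documented defaults).
cat >> /opt/kafka/config/server.properties <<'EOF'
client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback
# Per-broker throughput limits in bytes per second
client.quota.callback.static.produce=1048576
client.quota.callback.static.fetch=1048576
# Storage limits use the plugin's storage properties; see the linked
# documentation for the property names in your plugin version.
EOF
A broker restart is typically required for changes to server.properties to take effect.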
Chapter 11. DeploymentLog [apps.openshift.io/v1] | Chapter 11. DeploymentLog [apps.openshift.io/v1] Description DeploymentLog represents the logs for a deployment Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 11.2. API endpoints The following API endpoints are available: /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/log GET : read log of the specified DeploymentConfig 11.2.1. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/log Table 11.1. Global path parameters Parameter Type Description name string name of the DeploymentLog namespace string object name and auth scope, such as for teams and projects Table 11.2. Global query parameters Parameter Type Description container string The container for which to stream logs. Defaults to only container if there is one container in the pod. follow boolean Follow if true indicates that the build log should be streamed until the build terminates. limitBytes integer If set, the number of bytes to read from the server before terminating the log output. This may not display a complete final line of logging, and may return slightly more or slightly less than the specified limit. nowait boolean NoWait if true causes the call to return immediately even if the deployment is not available yet. Otherwise the server will wait until the deployment has started. pretty string If 'true', then the output is pretty printed. previous boolean Return previous deployment logs. Defaults to false. sinceSeconds integer A relative time in seconds before the current time from which to show logs. If this value precedes the time a pod was started, only logs since the pod start will be returned. If this value is in the future, no logs will be returned. Only one of sinceSeconds or sinceTime may be specified. tailLines integer If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime. timestamps boolean If true, add an RFC3339 or RFC3339Nano timestamp at the beginning of every line of log output. Defaults to false. version integer Version of the deployment for which to view logs. HTTP method GET Description read log of the specified DeploymentConfig Table 11.3. HTTP responses HTTP code Response body 200 - OK DeploymentLog schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/workloads_apis/deploymentlog-apps-openshift-io-v1
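To make the endpoint documented above concrete, the following sketch reads a DeploymentConfig log through the raw REST path with the oc client. The namespace my-project and the name my-app are placeholders, and the query parameters correspond to the global query parameters table.
# Read the last 100 log lines with timestamps (namespace and name are placeholders).
oc get --raw "/apis/apps.openshift.io/v1/namespaces/my-project/deploymentconfigs/my-app/log?tailLines=100&timestamps=true"
# Stream the log while the deployment runs.
oc get --raw "/apis/apps.openshift.io/v1/namespaces/my-project/deploymentconfigs/my-app/log?follow=true"
In day-to-day use the same logs are usually retrieved with oc logs dc/my-app, which calls this endpoint on your behalf.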
Chapter 3. AdminPolicyBasedExternalRoute [k8s.ovn.org/v1] | Chapter 3. AdminPolicyBasedExternalRoute [k8s.ovn.org/v1] Description AdminPolicyBasedExternalRoute is a CRD allowing the cluster administrators to configure policies for external gateway IPs to be applied to all the pods contained in selected namespaces. Egress traffic from the pods that belong to the selected namespaces to outside the cluster is routed through these external gateway IPs. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object AdminPolicyBasedExternalRouteSpec defines the desired state of AdminPolicyBasedExternalRoute status object AdminPolicyBasedRouteStatus contains the observed status of the AdminPolicyBased route types. 3.1.1. .spec Description AdminPolicyBasedExternalRouteSpec defines the desired state of AdminPolicyBasedExternalRoute Type object Required from nextHops Property Type Description from object From defines the selectors that will determine the target namespaces to this CR. nextHops object NextHops defines two types of hops: Static and Dynamic. Each hop defines at least one external gateway IP. 3.1.2. .spec.from Description From defines the selectors that will determine the target namespaces to this CR. Type object Required namespaceSelector Property Type Description namespaceSelector object NamespaceSelector defines a selector to be used to determine which namespaces will be targeted by this CR 3.1.3. .spec.from.namespaceSelector Description NamespaceSelector defines a selector to be used to determine which namespaces will be targeted by this CR Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.4. .spec.from.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.5. .spec.from.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. 
values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.6. .spec.nextHops Description NextHops defines two types of hops: Static and Dynamic. Each hop defines at least one external gateway IP. Type object Property Type Description dynamic array DynamicHops defines a slices of DynamicHop. This field is optional. dynamic[] object DynamicHop defines the configuration for a dynamic external gateway interface. These interfaces are wrapped around a pod object that resides inside the cluster. The field NetworkAttachmentName captures the name of the multus network name to use when retrieving the gateway IP to use. The PodSelector and the NamespaceSelector are mandatory fields. static array StaticHops defines a slice of StaticHop. This field is optional. static[] object StaticHop defines the configuration of a static IP that acts as an external Gateway Interface. IP field is mandatory. 3.1.7. .spec.nextHops.dynamic Description DynamicHops defines a slices of DynamicHop. This field is optional. Type array 3.1.8. .spec.nextHops.dynamic[] Description DynamicHop defines the configuration for a dynamic external gateway interface. These interfaces are wrapped around a pod object that resides inside the cluster. The field NetworkAttachmentName captures the name of the multus network name to use when retrieving the gateway IP to use. The PodSelector and the NamespaceSelector are mandatory fields. Type object Required namespaceSelector podSelector Property Type Description bfdEnabled boolean BFDEnabled determines if the interface implements the Bidirectional Forward Detection protocol. Defaults to false. namespaceSelector object NamespaceSelector defines a selector to filter the namespaces where the pod gateways are located. networkAttachmentName string NetworkAttachmentName determines the multus network name to use when retrieving the pod IPs that will be used as the gateway IP. When this field is empty, the logic assumes that the pod is configured with HostNetwork and is using the node's IP as gateway. podSelector object PodSelector defines the selector to filter the pods that are external gateways. 3.1.9. .spec.nextHops.dynamic[].namespaceSelector Description NamespaceSelector defines a selector to filter the namespaces where the pod gateways are located. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.10. .spec.nextHops.dynamic[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.11. .spec.nextHops.dynamic[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.12. .spec.nextHops.dynamic[].podSelector Description PodSelector defines the selector to filter the pods that are external gateways. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.13. .spec.nextHops.dynamic[].podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.14. .spec.nextHops.dynamic[].podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.15. .spec.nextHops.static Description StaticHops defines a slice of StaticHop. This field is optional. Type array 3.1.16. .spec.nextHops.static[] Description StaticHop defines the configuration of a static IP that acts as an external Gateway Interface. IP field is mandatory. Type object Required ip Property Type Description bfdEnabled boolean BFDEnabled determines if the interface implements the Bidirectional Forward Detection protocol. Defaults to false. ip string IP defines the static IP to be used for egress traffic. The IP can be either IPv4 or IPv6. 3.1.17. .status Description AdminPolicyBasedRouteStatus contains the observed status of the AdminPolicyBased route types. Type object Property Type Description lastTransitionTime string Captures the time when the last change was applied. messages array (string) An array of Human-readable messages indicating details about the status of the object. status string A concise indication of whether the AdminPolicyBasedRoute resource is applied with success 3.2. 
API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes DELETE : delete collection of AdminPolicyBasedExternalRoute GET : list objects of kind AdminPolicyBasedExternalRoute POST : create an AdminPolicyBasedExternalRoute /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name} DELETE : delete an AdminPolicyBasedExternalRoute GET : read the specified AdminPolicyBasedExternalRoute PATCH : partially update the specified AdminPolicyBasedExternalRoute PUT : replace the specified AdminPolicyBasedExternalRoute /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name}/status GET : read status of the specified AdminPolicyBasedExternalRoute PATCH : partially update status of the specified AdminPolicyBasedExternalRoute PUT : replace status of the specified AdminPolicyBasedExternalRoute 3.2.1. /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes HTTP method DELETE Description delete collection of AdminPolicyBasedExternalRoute Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AdminPolicyBasedExternalRoute Table 3.2. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRouteList schema 401 - Unauthorized Empty HTTP method POST Description create an AdminPolicyBasedExternalRoute Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body AdminPolicyBasedExternalRoute schema Table 3.5. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 201 - Created AdminPolicyBasedExternalRoute schema 202 - Accepted AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty 3.2.2. /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name} Table 3.6. Global path parameters Parameter Type Description name string name of the AdminPolicyBasedExternalRoute HTTP method DELETE Description delete an AdminPolicyBasedExternalRoute Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.8. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AdminPolicyBasedExternalRoute Table 3.9. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AdminPolicyBasedExternalRoute Table 3.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AdminPolicyBasedExternalRoute Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body AdminPolicyBasedExternalRoute schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 201 - Created AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty 3.2.3. /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name}/status Table 3.15. 
Global path parameters Parameter Type Description name string name of the AdminPolicyBasedExternalRoute HTTP method GET Description read status of the specified AdminPolicyBasedExternalRoute Table 3.16. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified AdminPolicyBasedExternalRoute Table 3.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified AdminPolicyBasedExternalRoute Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body AdminPolicyBasedExternalRoute schema Table 3.21. 
HTTP responses HTTP code Response body 200 - OK AdminPolicyBasedExternalRoute schema 201 - Created AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_apis/adminpolicybasedexternalroute-k8s-ovn-org-v1
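Pulling the fields documented above together, a minimal manifest for this cluster-scoped resource might look like the following sketch. Every label, name, and IP address in it is a placeholder chosen for illustration rather than a value taken from this reference.
# Sketch only: one static next hop and one dynamic (pod-backed) next hop.
oc apply -f - <<'EOF'
apiVersion: k8s.ovn.org/v1
kind: AdminPolicyBasedExternalRoute
metadata:
  name: example-external-route
spec:
  from:
    namespaceSelector:
      matchLabels:
        gateway-traffic: "enabled"
  nextHops:
    static:
    - ip: "192.0.2.10"
      bfdEnabled: false
    dynamic:
    - podSelector:
        matchLabels:
          app: external-gateway
      namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: gateway-project
      networkAttachmentName: gateway-net
      bfdEnabled: false
EOF
Egress traffic from pods in namespaces whose labels match the from selector is then routed through the listed gateway IPs, as described in the resource description above.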
Chapter 4. Installing a cluster on OpenStack on your own infrastructure | Chapter 4. Installing a cluster on OpenStack on your own infrastructure In OpenShift Container Platform version 4.17, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.17 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 4.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. 
In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 4.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 4.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 4.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 4.4. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 4.5. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. 
Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/down-containers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/update-network-resources.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 4.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.17 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 4.9. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. 
If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 4.10. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 4.10.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 4.10.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. 
In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 4.11. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 4.12. 
Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure For a dual stack cluster deployment, edit the inventory.yaml file and uncomment the os_subnet6 attribute. To ensure that your network resources have unique names on the RHOSP deployment, create an environment variable and JSON file for use in the Ansible playbooks: Create an environment variable that has a unique name value by running the following command: USD export OS_NET_ID="openshift-USD(dd if=/dev/urandom count=4 bs=1 2>/dev/null |hexdump -e '"%02x"')" Verify that the variable is set by running the following command on a command line: USD echo USDOS_NET_ID Create a JSON object that includes the variable in a file called netid.json by running the following command: USD echo "{\"os_net_id\": \"USDOS_NET_ID\"}" | tee netid.json On a command line, create the network resources by running the following command: USD ansible-playbook -i inventory.yaml network.yaml Note The API and Ingress VIP fields will be overwritten in the inventory.yaml playbook with the IP addresses assigned to the network ports. Note The resources created by the network.yaml playbook are deleted by the down-network.yaml playbook. 4.13. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. 
All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. Additional resources Installation configuration parameters for OpenStack 4.13.1. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 4.13.2. Sample customized install-config.yaml file for RHOSP The following example install-config.yaml files demonstrate all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. Example 4.1. 
Example single stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Example 4.2. Example dual stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 4.13.3. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. You have Python 3 installed. Procedure On a command line, browse to the directory that contains the install-config.yaml and inventory.yaml files. From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run the following command: USD python -c 'import os import sys import yaml import re re_os_net_id = re.compile(r"{{\s*os_net_id\s*}}") os_net_id = os.getenv("OS_NET_ID") path = "common.yaml" facts = None for _dict in yaml.safe_load(open(path))[0]["tasks"]: if "os_network" in _dict.get("set_fact", {}): facts = _dict["set_fact"] break if not facts: print("Cannot find `os_network` in common.yaml file. 
Make sure OpenStack resource names are defined in one of the tasks.") sys.exit(1) os_network = re_os_net_id.sub(os_net_id, facts["os_network"]) os_subnet = re_os_net_id.sub(os_net_id, facts["os_subnet"]) path = "install-config.yaml" data = yaml.safe_load(open(path)) inventory = yaml.safe_load(open("inventory.yaml"))["all"]["hosts"]["localhost"] machine_net = [{"cidr": inventory["os_subnet_range"]}] api_vips = [inventory["os_apiVIP"]] ingress_vips = [inventory["os_ingressVIP"]] ctrl_plane_port = {"network": {"name": os_network}, "fixedIPs": [{"subnet": {"name": os_subnet}}]} if inventory.get("os_subnet6_range"): 1 os_subnet6 = re_os_net_id.sub(os_net_id, facts["os_subnet6"]) machine_net.append({"cidr": inventory["os_subnet6_range"]}) api_vips.append(inventory["os_apiVIP6"]) ingress_vips.append(inventory["os_ingressVIP6"]) data["networking"]["networkType"] = "OVNKubernetes" data["networking"]["clusterNetwork"].append({"cidr": inventory["cluster_network6_cidr"], "hostPrefix": inventory["cluster_network6_prefix"]}) data["networking"]["serviceNetwork"].append(inventory["service_subnet6_range"]) ctrl_plane_port["fixedIPs"].append({"subnet": {"name": os_subnet6}}) data["networking"]["machineNetwork"] = machine_net data["platform"]["openstack"]["apiVIPs"] = api_vips data["platform"]["openstack"]["ingressVIPs"] = ingress_vips data["platform"]["openstack"]["controlPlanePort"] = ctrl_plane_port del data["platform"]["openstack"]["externalDNS"] open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Applies to dual stack (IPv4/IPv6) environments. 4.13.4. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 4.13.5. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. 
For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 4.13.5.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 4.13.5.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. 
Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 4.14. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines, compute machine sets, and control plane machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. 
Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 4.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . 
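If you prefer to compose the storage location in a shell rather than by hand, a short sketch like the following can help. It is an optional convenience and not part of the documented procedure; it assumes that the file field is available from openstack image show, and that <image_service_public_URL> comes from the openstack catalog show image output:
$ IMAGE_FILE=$(openstack image show <image_name> -f value -c file)   # the file value noted earlier, for example /v2/images/<image_ID>/file
$ STORAGE_URL="<image_service_public_URL>${IMAGE_FILE}"   # combine it with the image service public address
$ echo "${STORAGE_URL}"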
Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 4.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 4.17. Updating network resources on RHOSP Update the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible Playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. 
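Before you run the playbooks, you can optionally confirm that the name you set for os_external_network matches an external network that already exists in RHOSP. This check is a suggestion only and is not part of the documented procedure:
$ openstack network list --external -c ID -c Name   # lists networks that RHOSP marks as external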
Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible Playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform postinstallation network configuration. If you do not define a value for os_bootstrap_fip , the installation program cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, update the network resources by running the update-network-resources.yaml playbook: USD ansible-playbook -i inventory.yaml update-network-resources.yaml 1 1 This playbook will add tags to the network, subnets, ports, and router. It also attaches floating IP addresses to the API and Ingress ports and sets the security groups for those ports. Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" Optional: You can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. 4.17.1. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. 
An example bare metal inventory.yaml file all: hosts: localhost: ansible_connection: local ansible_python_interpreter: "{{ansible_playbook_python}}" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external' ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 4.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 4.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files are not already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 4.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. 
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 4.22. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml steps Approve the certificate signing requests for the machines. 4.23. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
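While you wait for the compute nodes to appear, it can be convenient to poll the node list and the pending CSRs together. The following one-liner is an optional convenience and not part of the documented procedure:
$ watch -n 30 "oc get nodes; echo; oc get csr | grep -w Pending"   # refresh every 30 seconds until the worker nodes register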
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 4.24. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 4.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 4.26. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . | [
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/down-containers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/openstack/update-network-resources.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"export OS_NET_ID=\"openshift-USD(dd if=/dev/urandom count=4 bs=1 2>/dev/null |hexdump -e '\"%02x\"')\"",
"echo USDOS_NET_ID",
"echo \"{\\\"os_net_id\\\": \\\"USDOS_NET_ID\\\"}\" | tee netid.json",
"ansible-playbook -i inventory.yaml network.yaml",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"python -c 'import os import sys import yaml import re re_os_net_id = re.compile(r\"{{\\s*os_net_id\\s*}}\") os_net_id = os.getenv(\"OS_NET_ID\") path = \"common.yaml\" facts = None for _dict in yaml.safe_load(open(path))[0][\"tasks\"]: if \"os_network\" in _dict.get(\"set_fact\", {}): facts = _dict[\"set_fact\"] break if not facts: print(\"Cannot find `os_network` in common.yaml file. Make sure OpenStack resource names are defined in one of the tasks.\") sys.exit(1) os_network = re_os_net_id.sub(os_net_id, facts[\"os_network\"]) os_subnet = re_os_net_id.sub(os_net_id, facts[\"os_subnet\"]) path = \"install-config.yaml\" data = yaml.safe_load(open(path)) inventory = yaml.safe_load(open(\"inventory.yaml\"))[\"all\"][\"hosts\"][\"localhost\"] machine_net = [{\"cidr\": inventory[\"os_subnet_range\"]}] api_vips = [inventory[\"os_apiVIP\"]] ingress_vips = [inventory[\"os_ingressVIP\"]] ctrl_plane_port = {\"network\": {\"name\": os_network}, \"fixedIPs\": [{\"subnet\": {\"name\": os_subnet}}]} if inventory.get(\"os_subnet6_range\"): 1 os_subnet6 = re_os_net_id.sub(os_net_id, facts[\"os_subnet6\"]) machine_net.append({\"cidr\": inventory[\"os_subnet6_range\"]}) api_vips.append(inventory[\"os_apiVIP6\"]) ingress_vips.append(inventory[\"os_ingressVIP6\"]) data[\"networking\"][\"networkType\"] = \"OVNKubernetes\" data[\"networking\"][\"clusterNetwork\"].append({\"cidr\": inventory[\"cluster_network6_cidr\"], \"hostPrefix\": inventory[\"cluster_network6_prefix\"]}) data[\"networking\"][\"serviceNetwork\"].append(inventory[\"service_subnet6_range\"]) ctrl_plane_port[\"fixedIPs\"].append({\"subnet\": {\"name\": os_subnet6}}) data[\"networking\"][\"machineNetwork\"] = machine_net data[\"platform\"][\"openstack\"][\"apiVIPs\"] = api_vips data[\"platform\"][\"openstack\"][\"ingressVIPs\"] = ingress_vips data[\"platform\"][\"openstack\"][\"controlPlanePort\"] = ctrl_plane_port del data[\"platform\"][\"openstack\"][\"externalDNS\"] open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml update-network-resources.yaml 1",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"openshift-install --log-level debug wait-for install-complete"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_openstack/installing-openstack-user |
Chapter 11. Migrating | Chapter 11. Migrating Warning The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see "Migrating" in the Red Hat build of OpenTelemetry documentation, "Installing" in the Red Hat build of OpenTelemetry documentation, and "Installing" in the distributed tracing platform (Tempo) documentation. If you are already using the Red Hat OpenShift distributed tracing platform (Jaeger) for your applications, you can migrate to the Red Hat build of OpenTelemetry, which is based on the OpenTelemetry open-source project. The Red Hat build of OpenTelemetry provides a set of APIs, libraries, agents, and instrumentation to facilitate observability in distributed systems. The OpenTelemetry Collector in the Red Hat build of OpenTelemetry can ingest the Jaeger protocol, so you do not need to change the SDKs in your applications. Migration from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry requires configuring the OpenTelemetry Collector and your applications to report traces seamlessly. You can migrate sidecar and sidecarless deployments. 11.1. Migrating with sidecars The Red Hat build of OpenTelemetry Operator supports sidecar injection into deployment workloads, so you can migrate from a distributed tracing platform (Jaeger) sidecar to a Red Hat build of OpenTelemetry sidecar. Prerequisites The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster. The Red Hat build of OpenTelemetry is installed. Procedure Configure the OpenTelemetry Collector as a sidecar. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: "tempo-<example>-gateway:8090" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp] 1 This endpoint points to the Gateway of a TempoStack instance deployed by using the <example> Tempo Operator. Create a service account for running your application. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar Create a cluster role for the permissions needed by some processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: ["config.openshift.io"] resources: ["infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The resourcedetectionprocessor requires permissions for infrastructures and infrastructures/status. 
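Each manifest in this procedure can be saved to a file and applied with the OpenShift CLI as you go. The file names in the following sketch are hypothetical placeholders, and the commands are a convenience rather than part of the documented procedure:
$ oc apply -f otel-collector-sidecar-serviceaccount.yaml   # the ServiceAccount manifest shown above
$ oc apply -f otel-collector-sidecar-clusterrole.yaml   # the ClusterRole manifest shown above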
Create a ClusterRoleBinding to set the permissions for the service account. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector-sidecar apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector as a sidecar. Remove the injected Jaeger Agent from your application by removing the "sidecar.jaegertracing.io/inject": "true" annotation from your Deployment object. Enable automatic injection of the OpenTelemetry sidecar by adding the sidecar.opentelemetry.io/inject: "true" annotation to the .spec.template.metadata.annotations field of your Deployment object. Use the created service account for the deployment of your application to allow the processors to get the correct information and add it to your traces. 11.2. Migrating without sidecars You can migrate from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry without sidecar deployment. Prerequisites The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster. The Red Hat build of OpenTelemetry is installed. Procedure Configure OpenTelemetry Collector deployment. Create the project where the OpenTelemetry Collector will be deployed. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account for running the OpenTelemetry Collector instance. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability Create a cluster role for setting the required permissions for the processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 Permissions for the pods and namespaces resources are required for the k8sattributesprocessor . 2 Permissions for infrastructures and infrastructures/status are required for resourcedetectionprocessor . Create a ClusterRoleBinding to set the permissions for the service account. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the OpenTelemetry Collector instance. Note This collector will export traces to a TempoStack instance. You must create your TempoStack instance by using the Red Hat Tempo Operator and place the correct endpoint here. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-example-gateway:8090" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] Point your tracing endpoint to the OpenTelemetry Operator.
If you are exporting your traces directly from your application to Jaeger, change the API endpoint from the Jaeger endpoint to the OpenTelemetry Collector endpoint. Example of exporting traces by using the jaegerexporter with Golang exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1 1 The URL points to the OpenTelemetry Collector API endpoint. | [
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-example-gateway:8090\" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/red_hat_build_of_opentelemetry/dist-tracing-otel-migrating |
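The sidecar deployment steps above describe the injection annotation and service account only in prose. As a minimal sketch, assuming a hypothetical application called my-app with a placeholder image, a Deployment that opts in to automatic sidecar injection and reuses the otel-collector-sidecar service account could look like the following; only the annotation name, the .spec.template.metadata.annotations field path, the service account name, and the otel-collector-example namespace come from the documentation above, everything else is an assumption.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                      # hypothetical application name
  namespace: otel-collector-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        sidecar.opentelemetry.io/inject: "true"     # requests automatic injection of the OpenTelemetry sidecar
    spec:
      serviceAccountName: otel-collector-sidecar    # service account created earlier in the migration procedure
      containers:
      - name: my-app
        image: quay.io/example/my-app:latest        # placeholder image
        ports:
        - containerPort: 8080

With the annotation present, the Operator injects the collector container defined by the sidecar-mode OpenTelemetryCollector resource into each pod of this deployment, and the jaegertracing.io injection annotation should no longer appear on the object.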
Chapter 4. Installation configuration parameters for IBM Power | Chapter 4. Installation configuration parameters for IBM Power Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. 4.1. Available installation configuration parameters for IBM Power The following tables specify the required, optional, and IBM Power-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 4.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 4.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 4.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 4.2. 
Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugin supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 4.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 4.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs.
For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_power/installation-config-parameters-ibm-power |
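Because the tables above list the parameters individually, it can help to see them assembled. The following is a minimal, non-authoritative sketch of an install-config.yaml for IBM Power that combines the required parameters with the documented networking defaults; the base domain, cluster name, pull secret, and SSH key are placeholders, and the platform value should be chosen from the options listed in the tables.

apiVersion: v1
baseDomain: example.com                  # placeholder base domain
metadata:
  name: ppc64le-cluster                  # placeholder cluster name
compute:
- name: worker
  architecture: ppc64le
  hyperthreading: Enabled
  replicas: 3                            # documented default
controlPlane:
  name: master
  architecture: ppc64le
  hyperthreading: Enabled
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16                    # documented default; match the CIDR of the preferred NIC
platform: {}                             # replace with one of the platform objects listed in the table
fips: false
pullSecret: '{"auths": ...}'             # truncated placeholder; supply your own pull secret
sshKey: 'ssh-ed25519 AAAA...'            # truncated placeholder

Optional fields from the second table, such as credentialsMode, capabilities, or imageContentSources, are added at the same top level of the file when they are needed.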
Web console | Web console OpenShift Container Platform 4.9 Getting started with the web console in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>",
"oc edit console.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: \"\" 1 status: consoleURL: \"\" 2",
"oc create configmap console-custom-logo --from-file /path/to/console-custom-logo.png -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: console-custom-logo namespace: openshift-config data: console-custom-logo.png: <base64-encoded_logo> ... 1",
"oc edit consoles.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: customLogoFile: key: console-custom-logo.png name: console-custom-logo customProductName: My Console",
"oc get clusteroperator console -o yaml",
"oc get consoles.operator.openshift.io -o yaml",
"apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: example spec: href: 'https://www.example.com' location: HelpMenu 1 text: Link 1",
"apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-link-for-all-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard text: This appears in all namespaces",
"apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-for-some-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard # This text will appear in a box called \"Launcher\" under \"namespace\" or \"project\" in the web console text: Custom Link Text namespaceDashboard: namespaces: # for these specific namespaces - my-namespace - your-namespace - other-namespace",
"apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: application-menu-link-1 spec: href: 'https://www.example.com' location: ApplicationMenu text: Link 1 applicationMenu: section: My New Section # image that is 24x24 in size imageURL: https://via.placeholder.com/24",
"oc edit ingress.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: console namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2",
"oc edit ingress.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: downloads namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2",
"oc adm create-login-template > login.html",
"oc adm create-provider-selection-template > providers.html",
"oc adm create-error-template > errors.html",
"oc create secret generic login-template --from-file=login.html -n openshift-config",
"oc create secret generic providers-template --from-file=providers.html -n openshift-config",
"oc create secret generic error-template --from-file=errors.html -n openshift-config",
"oc edit oauths cluster",
"spec: templates: error: name: error-template login: name: login-template providerSelection: name: providers-template",
"apiVersion: console.openshift.io/v1 kind: ConsoleExternalLogLink metadata: name: example spec: hrefTemplate: >- https://example.com/logs?resourceName=USD{resourceName}&containerName=USD{containerName}&resourceNamespace=USD{resourceNamespace}&podLabels=USD{podLabels} text: Example Logs",
"apiVersion: console.openshift.io/v1 kind: ConsoleNotification metadata: name: example spec: text: This is an example notification message with an optional link. location: BannerTop 1 link: href: 'https://www.example.com' text: Optional link text color: '#fff' backgroundColor: '#0088ce'",
"apiVersion: console.openshift.io/v1 kind: ConsoleCLIDownload metadata: name: example-cli-download-links-for-foo spec: description: | This is an example of download links for foo displayName: example-foo links: - href: 'https://www.example.com/public/foo.tar' text: foo for linux - href: 'https://www.example.com/public/foo.mac.zip' text: foo for mac - href: 'https://www.example.com/public/foo.win.zip' text: foo for windows",
"apiVersion: console.openshift.io/v1 kind: ConsoleYAMLSample metadata: name: example spec: targetResource: apiVersion: batch/v1 kind: Job title: Example Job description: An example Job YAML sample yaml: | apiVersion: batch/v1 kind: Job metadata: name: countdown spec: template: metadata: name: countdown spec: containers: - name: counter image: centos:7 command: - \"bin/bash\" - \"-c\" - \"for i in 9 8 7 6 5 4 3 2 1 ; do echo USDi ; done\" restartPolicy: Never",
"oc delete devworkspaces.workspace.devfile.io --all-namespaces --selector 'console.openshift.io/terminal=true' --wait",
"oc delete devworkspacetemplates.workspace.devfile.io --all-namespaces --selector 'console.openshift.io/terminal=true' --wait",
"oc delete devworkspaces.workspace.devfile.io --all-namespaces --all --wait",
"oc delete devworkspaceroutings.controller.devfile.io --all-namespaces --all --wait",
"oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceroutings.controller.devfile.io",
"oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaces.workspace.devfile.io",
"oc delete customresourcedefinitions.apiextensions.k8s.io devworkspacetemplates.workspace.devfile.io",
"oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceoperatorconfigs.controller.devfile.io",
"oc get customresourcedefinitions.apiextensions.k8s.io | grep \"devfile.io\"",
"oc delete deployment/devworkspace-webhook-server -n openshift-operators",
"oc delete mutatingwebhookconfigurations controller.devfile.io",
"oc delete validatingwebhookconfigurations controller.devfile.io",
"oc delete all --selector app.kubernetes.io/part-of=devworkspace-operator,app.kubernetes.io/name=devworkspace-webhook-server -n openshift-operators",
"oc delete serviceaccounts devworkspace-webhook-server -n openshift-operators",
"oc delete configmap devworkspace-controller -n openshift-operators",
"oc delete clusterrole devworkspace-webhook-server",
"oc delete clusterrolebinding devworkspace-webhook-server",
"oc edit consoles.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: managementState: Removed 1",
"oc get -o yaml consolequickstart spring-with-s2i > my-quick-start.yaml",
"oc create -f my-quick-start.yaml",
"oc explain consolequickstarts",
"summary: failed: Try the steps again. success: Your Spring application is running. title: Run the Spring application conclusion: >- Your Spring application is deployed and ready. 1",
"apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 1",
"apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring 1 durationMinutes: 10",
"apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 1",
"spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 icon: >- 1 data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGlkPSJMYXllcl8xIiBkYXRhLW5hbWU9IkxheWVyIDEiIHZpZXdCb3g9IjAgMCAxMDI0IDEwMjQiPjxkZWZzPjxzdHlsZT4uY2xzLTF7ZmlsbDojMTUzZDNjO30uY2xzLTJ7ZmlsbDojZDhkYTlkO30uY2xzLTN7ZmlsbDojNThjMGE4O30uY2xzLTR7ZmlsbDojZmZmO30uY2xzLTV7ZmlsbDojM2Q5MTkxO308L3N0eWxlPjwvZGVmcz48dGl0bGU+c25vd2Ryb3BfaWNvbl9yZ2JfZGVmYXVsdDwvdGl0bGU+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMTAxMi42OSw1OTNjLTExLjEyLTM4LjA3LTMxLTczLTU5LjIxLTEwMy44LTkuNS0xMS4zLTIzLjIxLTI4LjI5LTM5LjA2LTQ3Ljk0QzgzMy41MywzNDEsNzQ1LjM3LDIzNC4xOCw2NzQsMTY4Ljk0Yy01LTUuMjYtMTAuMjYtMTAuMzEtMTUuNjUtMTUuMDdhMjQ2LjQ5LDI0Ni40OSwwLDAsMC0zNi41NS0yNi44LDE4Mi41LDE4Mi41LDAsMCwwLTIwLjMtMTEuNzcsMjAxLjUzLDIwMS41MywwLDAsMC00My4xOS0xNUExNTUuMjQsMTU1LjI0LDAsMCwwLDUyOCw5NS4yYy02Ljc2LS42OC0xMS43NC0uODEtMTQuMzktLjgxaDBsLTEuNjIsMC0xLjYyLDBhMTc3LjMsMTc3LjMsMCwwLDAtMzEuNzcsMy4zNSwyMDguMjMsMjA4LjIzLDAsMCwwLTU2LjEyLDE3LjU2LDE4MSwxODEsMCwwLDAtMjAuMjcsMTEuNzUsMjQ3LjQzLDI0Ny40MywwLDAsMC0zNi41NywyNi44MUMzNjAuMjUsMTU4LjYyLDM1NSwxNjMuNjgsMzUwLDE2OWMtNzEuMzUsNjUuMjUtMTU5LjUsMTcyLTI0MC4zOSwyNzIuMjhDOTMuNzMsNDYwLjg4LDgwLDQ3Ny44Nyw3MC41Miw0ODkuMTcsNDIuMzUsNTIwLDIyLjQzLDU1NC45LDExLjMxLDU5MywuNzIsNjI5LjIyLTEuNzMsNjY3LjY5LDQsNzA3LjMxLDE1LDc4Mi40OSw1NS43OCw4NTkuMTIsMTE4LjkzLDkyMy4wOWEyMiwyMiwwLDAsMCwxNS41OSw2LjUyaDEuODNsMS44Ny0uMzJjODEuMDYtMTMuOTEsMTEwLTc5LjU3LDE0My40OC0xNTUuNiwzLjkxLTguODgsNy45NS0xOC4wNSwxMi4yLTI3LjQzcTUuNDIsOC41NCwxMS4zOSwxNi4yM2MzMS44NSw0MC45MSw3NS4xMiw2NC42NywxMzIuMzIsNzIuNjNsMTguOCwyLjYyLDQuOTUtMTguMzNjMTMuMjYtNDkuMDcsMzUuMy05MC44NSw1MC42NC0xMTYuMTksMTUuMzQsMjUuMzQsMzcuMzgsNjcuMTIsNTAuNjQsMTE2LjE5bDUsMTguMzMsMTguOC0yLjYyYzU3LjItOCwxMDAuNDctMzEuNzIsMTMyLjMyLTcyLjYzcTYtNy42OCwxMS4zOS0xNi4yM2M0LjI1LDkuMzgsOC4yOSwxOC41NSwxMi4yLDI3LjQzLDMzLjQ5LDc2LDYyLjQyLDE0MS42OSwxNDMuNDgsMTU1LjZsMS44MS4zMWgxLjg5YTIyLDIyLDAsMCwwLDE1LjU5LTYuNTJjNjMuMTUtNjQsMTAzLjk1LTE0MC42LDExNC44OS0yMTUuNzhDMTAyNS43Myw2NjcuNjksMTAyMy4yOCw2MjkuMjIsMTAxMi42OSw1OTNaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNMzY0LjE1LDE4NS4yM2MxNy44OS0xNi40LDM0LjctMzAuMTUsNDkuNzctNDAuMTFhMjEyLDIxMiwwLDAsMSw2NS45My0yNS43M0ExOTgsMTk4LDAsMCwxLDUxMiwxMTYuMjdhMTk2LjExLDE5Ni4xMSwwLDAsMSwzMiwzLjFjNC41LjkxLDkuMzYsMi4wNiwxNC41MywzLjUyLDYwLjQxLDIwLjQ4LDg0LjkyLDkxLjA1LTQ3LjQ0LDI0OC4wNi0yOC43NSwzNC4xMi0xNDAuNywxOTQuODQtMTg0LjY2LDI2OC40MmE2MzAuODYsNjMwLjg2LDAsMCwwLTMzLjIyLDU4LjMyQzI3Niw2NTUuMzQsMjY1LjQsNTk4LDI2NS40LDUyMC4yOSwyNjUuNCwzNDAuNjEsMzExLjY5LDI0MC43NCwzNjQuMTUsMTg1LjIzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTUyNy41NCwzODQuODNjODQuMDYtOTkuNywxMTYuMDYtMTc3LjI4LDk1LjIyLTIzMC43NCwxMS42Miw4LjY5LDI0LDE5LjIsMzcuMDYsMzEuMTMsNTIuNDgsNTUuNSw5OC43OCwxNTUuMzgsOTguNzgsMzM1LjA3LDAsNzcuNzEtMTAuNiwxMzUuMDUtMjcuNzcsMTc3LjRhNjI4LjczLDYyOC43MywwLDAsMC0zMy4yMy01OC4zMmMtMzktNjUuMjYtMTMxLjQ1LTE5OS0xNzEuOTMtMjUyLjI3QzUyNi4zMywzODYuMjksNTI3LDM4NS41Miw1MjcuNTQsMzg0LjgzWiIvPjxwYXRoIGNsYXNzPSJjbHMtNCIgZD0iTTEzNC41OCw5MDguMDdoLS4wNmEuMzkuMzksMCwwLDEtLjI3LS4xMWMtMTE5LjUyLTEyMS4wNy0xNTUtMjg3LjQtNDcuNTQtNDA0LjU4LDM0LjYzLTQxLjE0LDEyMC0xNTEuNiwyMDIuNzUtMjQyLjE5LTMuMTMsNy02LjEyLDE0LjI1LTguOTIsMjEuNjktMjQuMzQsNjQuNDUtMzYuNjcsMTQ0LjMyLTM2LjY3LDIzNy40MSwwLDU2LjUzLDUuNTgsMTA2LDE2LjU5LDE0Ny4xNEEzMDcuNDksMzA3LjQ5LDAsMCwwLDI4MC45MSw3MjNDMjM3LDgxNi44OCwyMTYuOTMsODkzLjkzLDEzNC41OCw5MDguMDdaIi8+PHBhdGggY2xhc3M9ImNscy01IiBkPSJNNTgzLjQzLDgxMy43OUM1NjAuMTgsNzI3LjcyLDUxMiw2NjQuMTUsNTEyLDY2NC4xNXMtNDguMTcsNjMuNTctNzEuNDMsMTQ5LjY0Yy00OC40NS02Ljc0LTEwMC45MS0yNy41Mi0xMzUu
NjYtOTEuMThhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Ny03MS41NGwuMjEtLjMyLjE5LS4zM2MzOC02My42MywxMjYuNC0xOTEuMzcsMTY3LjEyLTI0NS42Niw0MC43MSw1NC4yOCwxMjkuMSwxODIsMTY3LjEyLDI0NS42NmwuMTkuMzMuMjEuMzJhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Nyw3MS41NEM2ODQuMzQsNzg2LjI3LDYzMS44OCw4MDcuMDUsNTgzLjQzLDgxMy43OVoiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik04ODkuNzUsOTA4YS4zOS4zOSwwLDAsMS0uMjcuMTFoLS4wNkM4MDcuMDcsODkzLjkzLDc4Nyw4MTYuODgsNzQzLjA5LDcyM2EzMDcuNDksMzA3LjQ5LDAsMCwwLDIwLjQ1LTU1LjU0YzExLTQxLjExLDE2LjU5LTkwLjYxLDE2LjU5LTE0Ny4xNCwwLTkzLjA4LTEyLjMzLTE3My0zNi42Ni0yMzcuNHEtNC4yMi0xMS4xNi04LjkzLTIxLjdjODIuNzUsOTAuNTksMTY4LjEyLDIwMS4wNSwyMDIuNzUsMjQyLjE5QzEwNDQuNzksNjIwLjU2LDEwMDkuMjcsNzg2Ljg5LDg4OS43NSw5MDhaIi8+PC9zdmc+Cg==",
"introduction: >- 1 **Spring** is a Java framework for building applications based on a distributed microservices architecture. - Spring enables easy packaging and configuration of Spring applications into a self-contained executable application which can be easily deployed as a container to OpenShift. - Spring applications can integrate OpenShift capabilities to provide a natural \"Spring on OpenShift\" developer experience for both existing and net-new Spring applications. For example: - Externalized configuration using Kubernetes ConfigMaps and integration with Spring Cloud Kubernetes - Service discovery using Kubernetes Services - Load balancing with Replication Controllers - Kubernetes health probes and integration with Spring Actuator - Metrics: Prometheus, Grafana, and integration with Spring Cloud Sleuth - Distributed tracing with Istio & Jaeger tracing - Developer tooling through Red Hat OpenShift and Red Hat CodeReady developer tooling to quickly scaffold new Spring projects, gain access to familiar Spring APIs in your favorite IDE, and deploy to Red Hat OpenShift",
"icon: >- data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHJvbGU9ImltZyIgdmlld.",
"accessReviewResources: - group: helm.openshift.io resource: helmchartrepositories verb: create",
"accessReviewResources: - group: operators.coreos.com resource: operatorgroups verb: list - group: packages.operators.coreos.com resource: packagemanifests verb: list",
"nextQuickStart: - add-healthchecks",
"[Perspective switcher]{{highlight qs-perspective-switcher}}",
"[Home]{{highlight qs-nav-home}} [Operators]{{highlight qs-nav-operators}} [Workloads]{{highlight qs-nav-workloads}} [Serverless]{{highlight qs-nav-serverless}} [Networking]{{highlight qs-nav-networking}} [Storage]{{highlight qs-nav-storage}} [Service catalog]{{highlight qs-nav-servicecatalog}} [Compute]{{highlight qs-nav-compute}} [User management]{{highlight qs-nav-usermanagement}} [Administration]{{highlight qs-nav-administration}}",
"[Add]{{highlight qs-nav-add}} [Topology]{{highlight qs-nav-topology}} [Search]{{highlight qs-nav-search}} [Project]{{highlight qs-nav-project}} [Helm]{{highlight qs-nav-helm}}",
"[Builds]{{highlight qs-nav-builds}} [Pipelines]{{highlight qs-nav-pipelines}} [Monitoring]{{highlight qs-nav-monitoring}}",
"[CloudShell]{{highlight qs-masthead-cloudshell}} [Utility Menu]{{highlight qs-masthead-utilitymenu}} [User Menu]{{highlight qs-masthead-usermenu}} [Applications]{{highlight qs-masthead-applications}} [Import]{{highlight qs-masthead-import}} [Help]{{highlight qs-masthead-help}} [Notifications]{{highlight qs-masthead-notifications}}",
"`code block`{{copy}} `code block`{{execute}}",
"``` multi line code block ```{{copy}} ``` multi line code block ```{{execute}}",
"Create a serverless application.",
"In this quick start, you will deploy a sample application to {product-title}.",
"This quick start shows you how to deploy a sample application to {product-title}.",
"Tasks to complete: Create a serverless application; Connect an event source; Force a new revision",
"You will complete these 3 tasks: Creating a serverless application; Connecting an event source; Forcing a new revision",
"Click OK.",
"Click on the OK button.",
"Enter the Developer perspective: In the main navigation, click the dropdown menu and select Developer. Enter the Administrator perspective: In the main navigation, click the dropdown menu and select Admin.",
"In the node.js deployment, hover over the icon.",
"Hover over the icon in the node.js deployment.",
"Change the time range of the dashboard by clicking the dropdown menu and selecting time range.",
"To look at data in a specific time frame, you can change the time range of the dashboard.",
"In the navigation menu, click Settings.",
"In the left-hand menu, click Settings.",
"The success message indicates a connection.",
"The message with a green icon indicates a connection.",
"Set up your environment.",
"Let's set up our environment."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/web_console/index |
Chapter 26. Configuring ingress cluster traffic | Chapter 26. Configuring ingress cluster traffic 26.1. Configuring ingress cluster traffic overview OpenShift Container Platform provides the following methods for communicating from outside the cluster with services running in the cluster. The methods are recommended, in order of preference: If you have HTTP/HTTPS, use an Ingress Controller. If you have a TLS-encrypted protocol other than HTTPS, for example, TLS with the SNI header, use an Ingress Controller. Otherwise, use a Load Balancer, an External IP, or a NodePort . Method Purpose Use an Ingress Controller Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS (for example, TLS with the SNI header). Automatically assign an external IP using a load balancer service Allows traffic to non-standard ports through an IP address assigned from a pool. Most cloud platforms offer a method to start a service with a load-balancer IP address. About MetalLB and the MetalLB Operator Allows traffic to a specific IP address or address from a pool on the machine network. For bare-metal installations or platforms that are like bare metal, MetalLB provides a way to start a service with a load-balancer IP address. Manually assign an external IP to a service Allows traffic to non-standard ports through a specific IP address. Configure a NodePort Expose a service on all nodes in the cluster. 26.1.1. Comparison: Fault tolerant access to external IP addresses For the communication methods that provide access to an external IP address, fault tolerant access to the IP address is another consideration. The following features provide fault tolerant access to an external IP address. IP failover IP failover manages a pool of virtual IP addresses for a set of nodes. It is implemented with Keepalived and Virtual Router Redundancy Protocol (VRRP). IP failover is a layer 2 mechanism only and relies on multicast. Multicast can have disadvantages for some networks. MetalLB MetalLB has a layer 2 mode, but it does not use multicast. Layer 2 mode has a disadvantage that it transfers all traffic for an external IP address through one node. Manually assigning external IP addresses You can configure your cluster with an IP address block that is used to assign external IP addresses to services. By default, this feature is disabled. This feature is flexible, but places the largest burden on the cluster or network administrator. The cluster is prepared to receive traffic that is destined for the external IP, but each customer has to decide how they want to route traffic to nodes. 26.2. Configuring ExternalIPs for services As a cluster administrator, you can designate an IP address block that is external to the cluster that can send traffic to services in the cluster. This functionality is generally most useful for clusters installed on bare-metal hardware. 26.2.1. Prerequisites Your network infrastructure must route traffic for the external IP addresses to your cluster. 26.2.2. About ExternalIP For non-cloud environments, OpenShift Container Platform supports the use of the ExternalIP facility to specify external IP addresses in the spec.externalIPs[] parameter of the Service object. A service configured with an ExternalIP functions similarly to a service with type=NodePort , whereby traffic is directed to a local node for load balancing. Important For cloud environments, use the load balancer services for automatic deployment of a cloud load balancer to target the endpoints of a service.
After you specify a value for the parameter, OpenShift Container Platform assigns an additional virtual IP address to the service. The IP address can exist outside of the service network that you defined for your cluster. Warning Because ExternalIP is disabled by default, enabling the ExternalIP functionality might introduce security risks for the service, because in-cluster traffic to an external IP address is directed to that service. This configuration means that cluster users could intercept sensitive traffic destined for external resources. You can use either a MetalLB implementation or an IP failover deployment to attach an ExternalIP resource to a service in the following ways: Automatic assignment of an external IP OpenShift Container Platform automatically assigns an IP address from the autoAssignCIDRs CIDR block to the spec.externalIPs[] array when you create a Service object with spec.type=LoadBalancer set. For this configuration, OpenShift Container Platform implements a cloud version of the load balancer service type and assigns IP addresses to the services. Automatic assignment is disabled by default and must be configured by a cluster administrator as described in the "Configuration for ExternalIP" section. Manual assignment of an external IP OpenShift Container Platform uses the IP addresses assigned to the spec.externalIPs[] array when you create a Service object. You cannot specify an IP address that is already in use by another service. After using either the MetalLB implementation or an IP failover deployment to host external IP address blocks, you must configure your networking infrastructure to ensure that the external IP address blocks are routed to your cluster. This configuration means that the IP address is not configured in the network interfaces of the nodes. To handle the traffic, you must configure the routing and access to the external IP by using a method, such as static Address Resolution Protocol (ARP) entries. OpenShift Container Platform extends the ExternalIP functionality in Kubernetes by adding the following capabilities: Restrictions on the use of external IP addresses by users through a configurable policy Allocation of an external IP address automatically to a service upon request 26.2.3. Additional resources Configuring IP failover About MetalLB and the MetalLB Operator 26.2.4. Configuration for ExternalIP Use of an external IP address in OpenShift Container Platform is governed by the following parameters in the Network.config.openshift.io custom resource (CR) that is named cluster : spec.externalIP.autoAssignCIDRs defines an IP address block used by the load balancer when choosing an external IP address for the service. OpenShift Container Platform supports only a single IP address block for automatic assignment. This configuration requires fewer steps than manually assigning ExternalIPs to services, which requires managing the port space of a limited number of shared IP addresses. If you enable automatic assignment, a Service object with spec.type=LoadBalancer is allocated an external IP address. spec.externalIP.policy defines the permissible IP address blocks when manually specifying an IP address. OpenShift Container Platform does not apply policy rules to IP address blocks that you defined in the spec.externalIP.autoAssignCIDRs parameter. If routed correctly, external traffic from the configured external IP address block can reach service endpoints through any TCP or UDP port that the service exposes.
Important As a cluster administrator, you must configure routing to externalIPs. You must also ensure that the IP address block you assign terminates at one or more nodes in your cluster. For more information, see Kubernetes External IPs . OpenShift Container Platform supports both the automatic and manual assignment of IP addresses, where each address is guaranteed to be assigned to a maximum of one service. This configuration ensures that each service can expose its chosen ports regardless of the ports exposed by other services. Note To use IP address blocks defined by autoAssignCIDRs in OpenShift Container Platform, you must configure the necessary IP address assignment and routing for your host network. The following YAML describes a service with an external IP address configured: Example Service object with spec.externalIPs[] set apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253 # ... 26.2.5. Restrictions on the assignment of an external IP address As a cluster administrator, you can specify IP address blocks to allow and to reject IP addresses for a service. Restrictions apply only to users without cluster-admin privileges. A cluster administrator can always set the service spec.externalIPs[] field to any IP address. You configure an IP address policy by specifying Classless Inter-Domain Routing (CIDR) address blocks for the spec.ExternalIP.policy parameter in the policy object. Example in JSON form of a policy object and its CIDR parameters { "policy": { "allowedCIDRs": [], "rejectedCIDRs": [] } } When configuring policy restrictions, the following rules apply: If policy is set to {} , creating a Service object with spec.ExternalIPs[] results in a failed service. This setting is the default for OpenShift Container Platform. The same behavior exists for policy: null . If policy is set and either policy.allowedCIDRs[] or policy.rejectedCIDRs[] is set, the following rules apply: If allowedCIDRs[] and rejectedCIDRs[] are both set, rejectedCIDRs[] has precedence over allowedCIDRs[] . If allowedCIDRs[] is set, creating a Service object with spec.ExternalIPs[] succeeds only if the specified IP addresses are allowed. If rejectedCIDRs[] is set, creating a Service object with spec.ExternalIPs[] succeeds only if the specified IP addresses are not rejected. 26.2.6. Example policy objects The examples in this section show different spec.externalIP.policy configurations. In the following example, the policy prevents OpenShift Container Platform from creating any service with a specified external IP address. Example policy to reject any value specified for Service object spec.externalIPs[] apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {} # ... In the following example, both the allowedCIDRs and rejectedCIDRs fields are set. Example policy that includes both allowed and rejected CIDR blocks apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24 # ... In the following example, policy is set to {} . 
With this configuration, using the oc get networks.config.openshift.io -o yaml command to view the configuration means policy parameter does not show on the command output. The same behavior exists for policy: null . Example policy to allow any value specified for Service object spec.externalIPs[] apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 externalIP: policy: {} # ... 26.2.7. ExternalIP address block configuration The configuration for ExternalIP address blocks is defined by a Network custom resource (CR) named cluster . The Network CR is part of the config.openshift.io API group. Important During cluster installation, the Cluster Version Operator (CVO) automatically creates a Network CR named cluster . Creating any other CR objects of this type is not supported. The following YAML describes the ExternalIP configuration: Network.config.openshift.io CR named cluster apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2 ... 1 Defines the IP address block in CIDR format that is available for automatic assignment of external IP addresses to a service. Only a single IP address range is allowed. 2 Defines restrictions on manual assignment of an IP address to a service. If no restrictions are defined, specifying the spec.externalIP field in a Service object is not allowed. By default, no restrictions are defined. The following YAML describes the fields for the policy stanza: Network.config.openshift.io policy stanza policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2 1 A list of allowed IP address ranges in CIDR format. 2 A list of rejected IP address ranges in CIDR format. Example external IP configurations Several possible configurations for external IP address pools are displayed in the following examples: The following YAML describes a configuration that enables automatically assigned external IP addresses: Example configuration with spec.externalIP.autoAssignCIDRs set apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: autoAssignCIDRs: - 192.168.132.254/29 The following YAML configures policy rules for the allowed and rejected CIDR ranges: Example configuration with spec.externalIP.policy set apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32 26.2.8. Configure external IP address blocks for your cluster As a cluster administrator, you can configure the following ExternalIP settings: An ExternalIP address block used by OpenShift Container Platform to automatically populate the spec.clusterIP field for a Service object. A policy object to restrict what IP addresses may be manually assigned to the spec.clusterIP array of a Service object. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure Optional: To display the current external IP configuration, enter the following command: USD oc describe networks.config cluster To edit the configuration, enter the following command: USD oc edit networks.config cluster Modify the ExternalIP configuration, as in the following example: apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: 1 ... 1 Specify the configuration for the externalIP stanza. 
To confirm the updated ExternalIP configuration, enter the following command: USD oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{"\n"}}' 26.2.9. Next steps Configuring ingress cluster traffic for a service external IP 26.3. Configuring ingress cluster traffic using an Ingress Controller OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses an Ingress Controller. 26.3.1. Using Ingress Controllers and routes The Ingress Operator manages Ingress Controllers and wildcard DNS. Using an Ingress Controller is the most common way to allow external access to an OpenShift Container Platform cluster. An Ingress Controller is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP, HTTPS using SNI, and TLS using SNI, which is sufficient for web applications and services that work over TLS with SNI. Work with your administrator to configure an Ingress Controller to accept external requests and proxy them based on the configured routes. The administrator can create a wildcard DNS entry and then set up an Ingress Controller. Then, you can work with the edge Ingress Controller without having to contact the administrators. By default, every Ingress Controller in the cluster can admit any route created in any project in the cluster. The Ingress Controller: Has two replicas by default, which means it should be running on two worker nodes. Can be scaled up to have more replicas on more nodes. Note The procedures in this section require prerequisites performed by the cluster administrator. 26.3.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: USD oc adm policy add-cluster-role-to-user cluster-admin username You have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 26.3.3. Creating a project and service If the project and service that you want to expose do not exist, create the project and then create the service. If the project and service already exist, skip to the procedure on exposing the service to create a route. Prerequisites Install the OpenShift CLI ( oc ) and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project <project_name> Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n <project_name> Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s Note By default, the new service does not have an external IP address. 26.3.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Prerequisites You logged into OpenShift Container Platform.
Procedure Log in to the project where the service you want to expose is located: USD oc project <project_name> Run the oc expose service command to expose the route: USD oc expose service nodejs-ex Example output route.route.openshift.io/nodejs-ex exposed To verify that the service is exposed, you can use a tool, such as curl to check that the service is accessible from outside the cluster. To find the hostname of the route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None To check that the host responds to a GET request, enter the following command: Example curl command USD curl --head nodejs-ex-myproject.example.com Example output HTTP/1.1 200 OK ... 26.3.5. Ingress sharding in OpenShift Container Platform In OpenShift Container Platform, an Ingress Controller can serve all routes, or it can serve a subset of routes. By default, the Ingress Controller serves any route created in any namespace in the cluster. You can add additional Ingress Controllers to your cluster to optimize routing by creating shards , which are subsets of routes based on selected characteristics. To mark a route as a member of a shard, use labels in the route or namespace metadata field. The Ingress Controller uses selectors , also known as a selection expression , to select a subset of routes from the entire pool of routes to serve. Ingress sharding is useful in cases where you want to load balance incoming traffic across multiple Ingress Controllers, when you want to isolate traffic to be routed to a specific Ingress Controller, or for a variety of other reasons described in the section. By default, each route uses the default domain of the cluster. However, routes can be configured to use the domain of the router instead. 26.3.6. Ingress Controller sharding You can use Ingress sharding, also known as router sharding, to distribute a set of routes across multiple routers by adding labels to routes, namespaces, or both. The Ingress Controller uses a corresponding set of selectors to admit only the routes that have a specified label. Each Ingress shard comprises the routes that are filtered by using a given selection expression. As the primary mechanism for traffic to enter the cluster, the demands on the Ingress Controller can be significant. As a cluster administrator, you can shard the routes to: Balance Ingress Controllers, or routers, with several routes to accelerate responses to changes. Assign certain routes to have different reliability guarantees than other routes. Allow certain Ingress Controllers to have different policies defined. Allow only specific routes to use additional features. Expose different routes on different addresses so that internal and external users can see different routes, for example. Transfer traffic from one version of an application to another during a blue-green deployment. When Ingress Controllers are sharded, a given route is admitted to zero or more Ingress Controllers in the group. The status of a route describes whether an Ingress Controller has admitted the route. An Ingress Controller only admits a route if the route is unique to a shard. With sharding, you can distribute subsets of routes over multiple Ingress Controllers. These subsets can be nonoverlapping, also called traditional sharding, or overlapping, otherwise known as overlapped sharding. 
The following table outlines three sharding methods: Sharding method Description Namespace selector After you add a namespace selector to the Ingress Controller, all routes in a namespace that have matching labels for the namespace selector are included in the Ingress shard. Consider this method when an Ingress Controller serves all routes created in a namespace. Route selector After you add a route selector to the Ingress Controller, all routes with labels that match the route selector are included in the Ingress shard. Consider this method when you want an Ingress Controller to serve only a subset of routes or a specific route in a namespace. Namespace and route selectors Provides your Ingress Controller scope for both namespace selector and route selector methods. Consider this method when you want the flexibility of both the namespace selector and the route selector methods. 26.3.6.1. Traditional sharding example An example of a configured Ingress Controller finops-router that has the label selector spec.namespaceSelector.matchExpressions with key values set to finance and ops : Example YAML definition for finops-router apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: finops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - finance - ops An example of a configured Ingress Controller dev-router that has the label selector spec.namespaceSelector.matchLabels.name with the key value set to dev : Example YAML definition for dev-router apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: dev-router namespace: openshift-ingress-operator spec: namespaceSelector: matchLabels: name: dev If all application routes are in separate namespaces, such as each labeled with name:finance , name:ops , and name:dev , the configuration effectively distributes your routes between the two Ingress Controllers. OpenShift Container Platform routes for console, authentication, and other purposes should not be handled. In the scenario, sharding becomes a special case of partitioning, with no overlapping subsets. Routes are divided between router shards. Warning The default Ingress Controller continues to serve all routes unless the namespaceSelector or routeSelector fields contain routes that are meant for exclusion. See this Red Hat Knowledgebase solution and the section "Sharding the default Ingress Controller" for more information on how to exclude routes from the default Ingress Controller. 26.3.6.2. Overlapped sharding example An example of a configured Ingress Controller devops-router that has the label selector spec.namespaceSelector.matchExpressions with key values set to dev and ops : Example YAML definition for devops-router apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: devops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - dev - ops The routes in the namespaces labeled name:dev and name:ops are now serviced by two different Ingress Controllers. With this configuration, you have overlapping subsets of routes. With overlapping subsets of routes you can create more complex routing rules. For example, you can divert higher priority traffic to the dedicated finops-router while sending lower priority traffic to devops-router . 26.3.6.3. 
Sharding the default Ingress Controller After creating a new Ingress shard, there might be routes that are admitted to your new Ingress shard that are also admitted by the default Ingress Controller. This is because the default Ingress Controller has no selectors and admits all routes by default. You can restrict an Ingress Controller from servicing routes with specific labels using either namespace selectors or route selectors. The following procedure restricts the default Ingress Controller from servicing your newly sharded finance , ops , and dev , routes using a namespace selector. This adds further isolation to Ingress shards. Important You must keep all of OpenShift Container Platform's administration routes on the same Ingress Controller. Therefore, avoid adding additional selectors to the default Ingress Controller that exclude these essential routes. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as a project administrator. Procedure Modify the default Ingress Controller by running the following command: USD oc edit ingresscontroller -n openshift-ingress-operator default Edit the Ingress Controller to contain a namespaceSelector that excludes the routes with any of the finance , ops , and dev labels: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: NotIn values: - finance - ops - dev The default Ingress Controller will no longer serve the namespaces labeled name:finance , name:ops , and name:dev . 26.3.6.4. Ingress sharding and DNS The cluster administrator is responsible for making a separate DNS entry for each router in a project. A router will not forward unknown routes to another router. Consider the following example: Router A lives on host 192.168.0.5 and has routes with *.foo.com . Router B lives on host 192.168.1.9 and has routes with *.example.com . Separate DNS entries must resolve *.foo.com to the node hosting Router A and *.example.com to the node hosting Router B: *.foo.com A IN 192.168.0.5 *.example.com A IN 192.168.1.9 26.3.6.5. Configuring Ingress Controller sharding by using route labels Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector. Figure 26.1. Ingress sharding using route labels Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Procedure Edit the router-internal.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" routeSelector: matchLabels: type: sharded 1 Specify a domain to be used by the Ingress Controller. This domain must be different from the default Ingress Controller domain. Apply the Ingress Controller router-internal.yaml file: # oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that have the label type: sharded . Create a new route using the domain configured in the router-internal.yaml : USD oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net 26.3.6.6. 
Configuring Ingress Controller sharding by using namespace labels Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector. Figure 26.2. Ingress sharding using namespace labels Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Procedure Edit the router-internal.yaml file: USD cat router-internal.yaml Example output apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" namespaceSelector: matchLabels: type: sharded 1 Specify a domain to be used by the Ingress Controller. This domain must be different from the default Ingress Controller domain. Apply the Ingress Controller router-internal.yaml file: USD oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that is selected by the namespace selector that have the label type: sharded . Create a new route using the domain configured in the router-internal.yaml : USD oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net 26.3.6.7. Creating a route for Ingress Controller sharding A route allows you to host your application at a URL. In this case, the hostname is not set and the route uses a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. For situations where a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs. The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift application as an example. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as a project administrator. You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port. You have configured the Ingress Controller for sharding. Procedure Create a project called hello-openshift by running the following command: USD oc new-project hello-openshift Create a pod in the project by running the following command: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: USD oc expose pod/hello-openshift Create a route definition called hello-openshift-route.yaml : YAML definition of the created route for sharding: apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift 1 Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value type: sharded . 2 The route will be exposed using the value of the subdomain field. 
When you specify the subdomain field, you must leave the hostname unset. If you specify both the host and subdomain fields, then the route will use the value of the host field, and ignore the subdomain field. Use hello-openshift-route.yaml to create a route to the hello-openshift application by running the following command: USD oc -n hello-openshift create -f hello-openshift-route.yaml Verification Get the status of the route with the following command: USD oc -n hello-openshift get routes/hello-openshift-edge -o yaml The resulting Route resource should look similar to the following: Example output apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3 1 The hostname the Ingress Controller, or router, uses to expose the route. The value of the host field is automatically determined by the Ingress Controller, and uses its domain. In this example, the domain of the Ingress Controller is <apps-sharded.basedomain.example.net> . 2 The hostname of the Ingress Controller. 3 The name of the Ingress Controller. In this example, the Ingress Controller has the name sharded . Additional resources Baseline Ingress Controller (router) performance Ingress Operator in OpenShift Container Platform . Installing a cluster on bare metal . Installing a cluster on vSphere . About network policy 26.4. Configuring the Ingress Controller endpoint publishing strategy The endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems. Important On Red Hat OpenStack Platform (RHOSP), the LoadBalancerService endpoint publishing strategy is supported only if a cloud provider is configured to create health monitors. For RHOSP 16.2, this strategy is possible only if you use the Amphora Octavia provider. For more information, see the "Setting RHOSP Cloud Controller Manager options" section of the RHOSP installation documentation. 26.4.1. Ingress Controller endpoint publishing strategy NodePortService endpoint publishing strategy The NodePortService endpoint publishing strategy publishes the Ingress Controller using a Kubernetes NodePort service. In this configuration, the Ingress Controller deployment uses container networking. A NodePortService is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift Container Platform; however, to support static port allocations, your changes to the node port field of the managed NodePortService are preserved. Figure 26.3. Diagram of NodePortService The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress NodePort endpoint publishing strategy: All the available nodes in the cluster have their own, externally accessible IP addresses. The service running in the cluster is bound to the unique NodePort for all the nodes. When the client connects to a node that is down, for example, by connecting the 10.0.128.4 IP address in the graphic, the node port directly connects the client to an available node that is running the service. In this scenario, no load balancing is required. 
As the image shows, the 10.0.128.4 address is down and another IP address must be used instead. Note The Ingress Operator ignores any updates to .spec.ports[].nodePort fields of the service. By default, ports are allocated automatically and you can access the port allocations for integrations. However, sometimes static port allocations are necessary to integrate with existing infrastructure which may not be easily reconfigured in response to dynamic ports. To achieve integrations with static node ports, you can update the managed service resource directly. For more information, see the Kubernetes Services documentation on NodePort . HostNetwork endpoint publishing strategy The HostNetwork endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed. An Ingress Controller with the HostNetwork endpoint publishing strategy can have only one pod replica per node. If you want n replicas, you must use at least n nodes where those replicas can be scheduled. Because each pod replica requests ports 80 and 443 on the node host where it is scheduled, a replica cannot be scheduled to a node if another pod on the same node is using those ports. The HostNetwork object has a hostNetwork field with the following default values for the optional binding ports: httpPort: 80 , httpsPort: 443 , and statsPort: 1936 . By specifying different binding ports for your network, you can deploy multiple Ingress Controllers on the same node for the HostNetwork strategy. Example apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: internal namespace: openshift-ingress-operator spec: domain: example.com endpointPublishingStrategy: type: HostNetwork hostNetwork: httpPort: 80 httpsPort: 443 statsPort: 1936 26.4.1.1. Configuring the Ingress Controller endpoint publishing scope to Internal When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External . Cluster administrators can change an External scoped Ingress Controller to Internal . Prerequisites You installed the oc CLI. Procedure To change an External scoped Ingress Controller to Internal , enter the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"Internal"}}}}' To check the status of the Ingress Controller, enter the following command: USD oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command: USD oc -n openshift-ingress delete services/router-default If you delete the service, the Ingress Operator recreates it as Internal . 26.4.1.2. Configuring the Ingress Controller endpoint publishing scope to External When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External . The Ingress Controller's scope can be configured to be Internal during installation or after, and cluster administrators can change an Internal Ingress Controller to External . Important On some platforms, it is necessary to delete and recreate the service. 
Changing the scope can cause disruption to Ingress traffic, potentially for several minutes. This applies to platforms where it is necessary to delete and recreate the service, because the procedure can cause OpenShift Container Platform to deprovision the existing service load balancer, provision a new one, and update DNS. Prerequisites You installed the oc CLI. Procedure To change an Internal scoped Ingress Controller to External , enter the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"External"}}}}' To check the status of the Ingress Controller, enter the following command: USD oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command: USD oc -n openshift-ingress delete services/router-default If you delete the service, the Ingress Operator recreates it as External . 26.4.1.3. Adding a single NodePort service to an Ingress Controller Instead of creating a NodePort -type Service for each project, you can create a custom Ingress Controller to use the NodePortService endpoint publishing strategy. To prevent port conflicts, consider this configuration for your Ingress Controller when you want to apply a set of routes, through Ingress sharding, to nodes that might already have a HostNetwork Ingress Controller. Before you set a NodePort -type Service for each project, read the following considerations: You must create a wildcard DNS record for the Nodeport Ingress Controller domain. A Nodeport Ingress Controller route can be reached from the address of a worker node. For more information about the required DNS records for routes, see "User-provisioned DNS requirements". You must expose a route for your service and specify the --hostname argument for your custom Ingress Controller domain. You must append the port that is assigned to the NodePort -type Service in the route so that you can access application pods. Prerequisites You installed the OpenShift CLI ( oc ). Logged in as a user with cluster-admin privileges. You created a wildcard DNS record. Procedure Create a custom resource (CR) file for the Ingress Controller: Example of a CR file that defines information for the IngressController object apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <custom_ic_name> 1 namespace: openshift-ingress-operator spec: replicas: 1 domain: <custom_ic_domain_name> 2 nodePlacement: nodeSelector: matchLabels: <key>: <value> 3 namespaceSelector: matchLabels: <key>: <value> 4 endpointPublishingStrategy: type: NodePortService # ... 1 Specify the a custom name for the IngressController CR. 2 The DNS name that the Ingress Controller services. As an example, the default ingresscontroller domain is apps.ipi-cluster.example.com , so you would specify the <custom_ic_domain_name> as nodeportsvc.ipi-cluster.example.com . 3 Specify the label for the nodes that include the custom Ingress Controller. 4 Specify the label for a set of namespaces. Substitute <key>:<value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value. For example: ingresscontroller: custom-ic . 
Add a label to a node by using the oc label node command: USD oc label node <node_name> <key>=<value> 1 1 Where <value> must match the key-value pair specified in the nodePlacement section of your IngressController CR. Create the IngressController object: USD oc create -f <ingress_controller_cr>.yaml Find the port for the service created for the IngressController CR: USD oc get svc -n openshift-ingress Example output that shows port 80:32432/TCP for the router-nodeport-custom-ic3 service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-internal-default ClusterIP 172.30.195.74 <none> 80/TCP,443/TCP,1936/TCP 223d router-nodeport-custom-ic3 NodePort 172.30.109.219 <none> 80:32432/TCP,443:31366/TCP,1936:30499/TCP 155m To create a new project, enter the following command: USD oc new-project <project_name> To label the new namespace, enter the following command: USD oc label namespace <project_name> <key>=<value> 1 1 Where <key>=<value> must match the value in the namespaceSelector section of your Ingress Controller CR. Create a new application in your cluster: USD oc new-app --image=<image_name> 1 1 An example of <image_name> is quay.io/openshifttest/hello-openshift:multiarch . Create a Route object for a service, so that the pod can use the service to expose the application external to the cluster. USD oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name> 1 Note You must specify the domain name of your custom Ingress Controller in the --hostname argument. If you do not do this, the Ingress Operator uses the default Ingress Controller to serve all the routes for your cluster. Check that the route has the Admitted status and that it includes metadata for the custom Ingress Controller: USD oc get route/hello-openshift -o json | jq '.status.ingress' Example output # ... { "conditions": [ { "lastTransitionTime": "2024-05-17T18:25:41Z", "status": "True", "type": "Admitted" } ], [ { "host": "hello-openshift.nodeportsvc.ipi-cluster.example.com", "routerCanonicalHostname": "router-nodeportsvc.nodeportsvc.ipi-cluster.example.com", "routerName": "nodeportsvc", "wildcardPolicy": "None" } ], } Update the default IngressController CR to prevent the default Ingress Controller from managing the NodePort -type Service . The default Ingress Controller will continue to monitor all other cluster traffic. USD oc patch --type=merge -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"namespaceSelector":{"matchExpressions":[{"key":"<key>","operator":"NotIn","values":["<value>]}]}}}' Verification Verify that the DNS entry can route inside and outside of your cluster by entering the following command. The command outputs the IP address of the node that received the label from running the oc label node command earlier in the procedure. USD dig +short <svc_name>-<project_name>.<custom_ic_domain_name> To verify that your cluster uses the IP addresses from external DNS servers for DNS resolution, check the connection of your cluster by entering the following command: USD curl <svc_name>-<project_name>.<custom_ic_domain_name>:<port> 1 1 1 Where <port> is the node port from the NodePort -type Service . Based on example output from the oc get svc -n openshift-ingress command, the 80:32432/TCP HTTP route means that 32432 is the node port. Output example Hello OpenShift! 26.4.2. Additional resources Ingress Controller configuration parameters Setting RHOSP Cloud Controller Manager options User-provisioned DNS requirements 26.5. 
Configuring ingress cluster traffic using a load balancer OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a load balancer. 26.5.1. Using a load balancer to get traffic into the cluster If you do not need a specific external IP address, you can configure a load balancer service to allow external access to an OpenShift Container Platform cluster. A load balancer service allocates a unique IP. The load balancer has a single edge router IP, which can be a virtual IP (VIP), but is still a single machine for initial load balancing. Note If a pool is configured, it is done at the infrastructure level, not by a cluster administrator. Note The procedures in this section require prerequisites performed by the cluster administrator. 26.5.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: USD oc adm policy add-cluster-role-to-user cluster-admin username Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 26.5.3. Creating a project and service If the project and service that you want to expose does not exist, create the project and then create the service. If the project and service already exists, skip to the procedure on exposing the service to create a route. Prerequisites Install the OpenShift CLI ( oc ) and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project <project_name> Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n <project_name> Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s Note By default, the new service does not have an external IP address. 26.5.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Prerequisites You logged into OpenShift Container Platform. Procedure Log in to the project where the service you want to expose is located: USD oc project <project_name> Run the oc expose service command to expose the route: USD oc expose service nodejs-ex Example output route.route.openshift.io/nodejs-ex exposed To verify that the service is exposed, you can use a tool, such as curl to check that the service is accessible from outside the cluster. To find the hostname of the route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None To check that the host responds to a GET request, enter the following command: Example curl command USD curl --head nodejs-ex-myproject.example.com Example output HTTP/1.1 200 OK ... 26.5.5. Creating a load balancer service Use the following procedure to create a load balancer service. 
Prerequisites Make sure that the project and service you want to expose exist. Your cloud provider supports load balancers. Procedure To create a load balancer service: Log in to OpenShift Container Platform. Load the project where the service you want to expose is located. USD oc project project1 Open a text file on the control plane node and paste the following text, editing the file as needed: Sample load balancer configuration file 1 Enter a descriptive name for the load balancer service. 2 Enter the same port that the service you want to expose is listening on. 3 Enter a list of specific IP addresses to restrict traffic through the load balancer. This field is ignored if the cloud-provider does not support the feature. 4 Enter Loadbalancer as the type. 5 Enter the name of the service. Note To restrict the traffic through the load balancer to specific IP addresses, it is recommended to use the Ingress Controller field spec.endpointPublishingStrategy.loadBalancer.allowedSourceRanges . Do not set the loadBalancerSourceRanges field. Save and exit the file. Run the following command to create the service: USD oc create -f <file-name> For example: USD oc create -f mysql-lb.yaml Execute the following command to view the new service: USD oc get svc Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m The service has an external IP address automatically assigned if there is a cloud provider enabled. On the master, use a tool, such as cURL, to make sure you can reach the service using the public IP address: USD curl <public-ip>:<port> For example: USD curl 172.29.121.74:3306 The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connecting with the service: If you have a MySQL client, log in with the standard CLI command: USD mysql -h 172.30.131.89 -u admin -p Example output Enter password: Welcome to the MariaDB monitor. Commands end with ; or \g. MySQL [(none)]> 26.6. Configuring ingress cluster traffic on AWS OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses load balancers on AWS, specifically a Network Load Balancer (NLB) or a Classic Load Balancer (CLB). Both types of load balancers can forward the client's IP address to the node, but a CLB requires proxy protocol support, which OpenShift Container Platform automatically enables. There are two ways to configure an Ingress Controller to use an NLB: By force replacing the Ingress Controller that is currently using a CLB. This deletes the IngressController object and an outage will occur while the new DNS records propagate and the NLB is being provisioned. By editing an existing Ingress Controller that uses a CLB to use an NLB. This changes the load balancer without having to delete and recreate the IngressController object. Both methods can be used to switch from an NLB to a CLB. You can configure these load balancers on a new or existing AWS cluster. 26.6.1. Configuring Classic Load Balancer timeouts on AWS OpenShift Container Platform provides a method for setting a custom timeout period for a specific route or Ingress Controller. Additionally, an AWS Classic Load Balancer (CLB) has its own timeout period with a default time of 60 seconds. 
If the timeout period of the CLB is shorter than the route timeout or Ingress Controller timeout, the load balancer can prematurely terminate the connection. You can prevent this problem by increasing both the timeout period of the route and CLB. 26.6.1.1. Configuring route timeouts You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end. Prerequisites You need a deployed Ingress Controller on a running cluster. Procedure Using the oc annotate command, add the timeout to the route: USD oc annotate route <route_name> \ --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1 1 Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d). The following example sets a timeout of two seconds on a route named myroute : USD oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s 26.6.1.2. Configuring Classic Load Balancer timeouts You can configure the default timeouts for a Classic Load Balancer (CLB) to extend idle connections. Prerequisites You must have a deployed Ingress Controller on a running cluster. Procedure Set an AWS connection idle timeout of five minutes for the default ingresscontroller by running the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default \ --type=merge --patch='{"spec":{"endpointPublishingStrategy": \ {"type":"LoadBalancerService", "loadBalancer": \ {"scope":"External", "providerParameters":{"type":"AWS", "aws": \ {"type":"Classic", "classicLoadBalancer": \ {"connectionIdleTimeout":"5m"}}}}}}}' Optional: Restore the default value of the timeout by running the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default \ --type=merge --patch='{"spec":{"endpointPublishingStrategy": \ {"loadBalancer":{"providerParameters":{"aws":{"classicLoadBalancer": \ {"connectionIdleTimeout":null}}}}}}}' Note You must specify the scope field when you change the connection timeout value unless the current scope is already set. When you set the scope field, you do not need to do so again if you restore the default timeout value. 26.6.2. Configuring ingress cluster traffic on AWS using a Network Load Balancer OpenShift Container Platform provides methods for communicating from outside the cluster with services that run in the cluster. One such method uses a Network Load Balancer (NLB). You can configure an NLB on a new or existing AWS cluster. 26.6.2.1. Switching the Ingress Controller from using a Classic Load Balancer to a Network Load Balancer You can switch the Ingress Controller that is using a Classic Load Balancer (CLB) to one that uses a Network Load Balancer (NLB) on AWS. Switching between these load balancers will not delete the IngressController object. Warning This procedure might cause the following issues: An outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. Leaked load balancer resources due to a change in the annotation of the service. Procedure Modify the existing Ingress Controller that you want to switch to using an NLB. 
This example assumes that your default Ingress Controller has an External scope and no other customizations: Example ingresscontroller.yaml file apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService Note If you do not specify a value for the spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.type field, the Ingress Controller uses the spec.loadBalancer.platform.aws.type value from the cluster Ingress configuration that was set during installation. Tip If your Ingress Controller has other customizations that you want to update, such as changing the domain, consider force replacing the Ingress Controller definition file instead. Apply the changes to the Ingress Controller YAML file by running the command: USD oc apply -f ingresscontroller.yaml Expect several minutes of outages while the Ingress Controller updates. 26.6.2.2. Switching the Ingress Controller from using a Network Load Balancer to a Classic Load Balancer You can switch the Ingress Controller that is using a Network Load Balancer (NLB) to one that uses a Classic Load Balancer (CLB) on AWS. Switching between these load balancers will not delete the IngressController object. Warning This procedure might cause an outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. Procedure Modify the existing Ingress Controller that you want to switch to using a CLB. This example assumes that your default Ingress Controller has an External scope and no other customizations: Example ingresscontroller.yaml file apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: Classic type: LoadBalancerService Note If you do not specify a value for the spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.type field, the Ingress Controller uses the spec.loadBalancer.platform.aws.type value from the cluster Ingress configuration that was set during installation. Tip If your Ingress Controller has other customizations that you want to update, such as changing the domain, consider force replacing the Ingress Controller definition file instead. Apply the changes to the Ingress Controller YAML file by running the command: USD oc apply -f ingresscontroller.yaml Expect several minutes of outages while the Ingress Controller updates. 26.6.2.3. Replacing Ingress Controller Classic Load Balancer with Network Load Balancer You can replace an Ingress Controller that is using a Classic Load Balancer (CLB) with one that uses a Network Load Balancer (NLB) on AWS. Warning This procedure might cause the following issues: An outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. Leaked load balancer resources due to a change in the annotation of the service. Procedure Create a file with a new default Ingress Controller. 
The following example assumes that your default Ingress Controller has an External scope and no other customizations: Example ingresscontroller.yml file apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService If your default Ingress Controller has other customizations, ensure that you modify the file accordingly. Tip If your Ingress Controller has no other customizations and you are only updating the load balancer type, consider following the procedure detailed in "Switching the Ingress Controller from using a Classic Load Balancer to a Network Load Balancer". Force replace the Ingress Controller YAML file: USD oc replace --force --wait -f ingresscontroller.yml Wait until the Ingress Controller is replaced. Expect several of minutes of outages. 26.6.2.4. Configuring an Ingress Controller Network Load Balancer on an existing AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on an existing cluster. Prerequisites You must have an installed AWS cluster. PlatformStatus of the infrastructure resource must be AWS. To verify that the PlatformStatus is AWS, run: USD oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS Procedure Create an Ingress Controller backed by an AWS NLB on an existing cluster. Create the Ingress Controller manifest: USD cat ingresscontroller-aws-nlb.yaml Example output apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB 1 Replace USDmy_ingress_controller with a unique name for the Ingress Controller. 2 Replace USDmy_unique_ingress_domain with a domain name that is unique among all Ingress Controllers in the cluster. This variable must be a subdomain of the DNS name <clustername>.<domain> . 3 You can replace External with Internal to use an internal NLB. Create the resource in the cluster: USD oc create -f ingresscontroller-aws-nlb.yaml Important Before you can configure an Ingress Controller NLB on a new AWS cluster, you must complete the Creating the installation configuration file procedure. 26.6.2.5. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Create an Ingress Controller backed by an AWS NLB on a new cluster. Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. 
After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster. 26.6.3. Additional resources Installing a cluster on AWS with network customizations . For more information on support for NLBs, see Network Load Balancer support on AWS . For more information on proxy protocol support for CLBs, see Configure proxy protocol support for your Classic Load Balancer 26.7. Configuring ingress cluster traffic for a service external IP You can use either a MetalLB implementation or an IP failover deployment to attach an ExternalIP resource to a service so that the service is available to traffic outside your OpenShift Container Platform cluster. Hosting an external IP address in this way is only applicable for a cluster installed on bare-metal hardware. You must ensure that you correctly configure the external network infrastructure to route traffic to the service. 26.7.1. Prerequisites Your cluster is configured with ExternalIPs enabled. For more information, read Configuring ExternalIPs for services . Note Do not use the same ExternalIP for the egress IP. 26.7.2. Attaching an ExternalIP to a service You can attach an ExternalIP resource to a service. If you configured your cluster to automatically attach the resource to a service, you might not need to manually attach an ExternalIP to the service. The examples in the procedure use a scenario that manually attaches an ExternalIP resource to a service in a cluster with an IP failover configuration. Procedure Confirm compatible IP address ranges for the ExternalIP resource by entering the following command in your CLI: USD oc get networks.config cluster -o jsonpath='{.spec.externalIP}{"\n"}' Note If autoAssignCIDRs is set and you did not specify a value for spec.externalIPs in the ExternalIP resource, OpenShift Container Platform automatically assigns ExternalIP to a new Service object. Choose one of the following options to attach an ExternalIP resource to the service: If you are creating a new service, specify a value in the spec.externalIPs field and array of one or more valid IP addresses in the allowedCIDRs parameter. Example of service YAML configuration file that supports an ExternalIP resource apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: externalIPs: policy: allowedCIDRs: - 192.168.123.0/28 If you are attaching an ExternalIP to an existing service, enter the following command. Replace <name> with the service name. Replace <ip_address> with a valid ExternalIP address. You can provide multiple IP addresses separated by commas. 
USD oc patch svc <name> -p \ '{ "spec": { "externalIPs": [ "<ip_address>" ] } }' For example: USD oc patch svc mysql-55-rhel7 -p '{"spec":{"externalIPs":["192.174.120.10"]}}' Example output "mysql-55-rhel7" patched To confirm that an ExternalIP address is attached to the service, enter the following command. If you specified an ExternalIP for a new service, you must create the service first. USD oc get svc Example output NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m 26.7.3. Additional resources About MetalLB and the MetalLB Operator Configuring IP failover Configuring ExternalIPs for services 26.8. Configuring ingress cluster traffic by using a NodePort OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a NodePort . 26.8.1. Using a NodePort to get traffic into the cluster Use a NodePort -type Service resource to expose a service on a specific port on all nodes in the cluster. The port is specified in the Service resource's .spec.ports[*].nodePort field. Important Using a node port requires additional port resources. A NodePort exposes the service on a static port on the node's IP address. NodePort s are in the 30000 to 32767 range by default, which means a NodePort is unlikely to match a service's intended port. For example, port 8080 may be exposed as port 31020 on the node. The administrator must ensure the external IP addresses are routed to the nodes. NodePort s and external IPs are independent and both can be used concurrently. Note The procedures in this section require prerequisites performed by the cluster administrator. 26.8.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: USD oc adm policy add-cluster-role-to-user cluster-admin <user_name> Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 26.8.3. Creating a project and service If the project and service that you want to expose does not exist, create the project and then create the service. If the project and service already exists, skip to the procedure on exposing the service to create a route. Prerequisites Install the OpenShift CLI ( oc ) and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project <project_name> Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n <project_name> Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s Note By default, the new service does not have an external IP address. 26.8.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Prerequisites You logged into OpenShift Container Platform. 
Procedure Log in to the project where the service you want to expose is located: USD oc project <project_name> To expose a node port for the application, modify the custom resource definition (CRD) of a service by entering the following command: USD oc edit svc <service_name> Example output spec: ports: - name: 8443-tcp nodePort: 30327 1 port: 8443 protocol: TCP targetPort: 8443 sessionAffinity: None type: NodePort 2 1 Optional: Specify the node port range for the application. By default, OpenShift Container Platform selects an available port in the 30000-32767 range. 2 Define the service type. Optional: To confirm the service is available with a node port exposed, enter the following command: USD oc get svc -n myproject Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s Optional: To remove the service created automatically by the oc new-app command, enter the following command: USD oc delete svc nodejs-ex Verification To check that the service node port is updated with a port in the 30000-32767 range, enter the following command: USD oc get svc In the following example output, the updated port is 30327 : Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE httpd NodePort 172.xx.xx.xx <none> 8443:30327/TCP 109s 26.8.5. Additional resources Configuring the node port service range Adding a single NodePort service to an Ingress Controller 26.9. Configuring ingress cluster traffic using load balancer allowed source ranges You can specify a list of IP address ranges for the IngressController . This restricts access to the load balancer service when the endpointPublishingStrategy is LoadBalancerService . 26.9.1. Configuring load balancer allowed source ranges You can enable and configure the spec.endpointPublishingStrategy.loadBalancer.allowedSourceRanges field. By configuring load balancer allowed source ranges, you can limit access to the load balancer for the Ingress Controller to a specified list of IP address ranges. The Ingress Operator reconciles the load balancer Service and sets the spec.loadBalancerSourceRanges field based on AllowedSourceRanges . Note If you have already set the spec.loadBalancerSourceRanges field or the load balancer service annotation service.beta.kubernetes.io/load-balancer-source-ranges in an earlier version of OpenShift Container Platform, the Ingress Controller starts reporting Progressing=True after an upgrade. To fix this, set AllowedSourceRanges , which overwrites the spec.loadBalancerSourceRanges field and clears the service.beta.kubernetes.io/load-balancer-source-ranges annotation. The Ingress Controller starts reporting Progressing=False again. Prerequisites You have a deployed Ingress Controller on a running cluster. Procedure Set the allowed source ranges API for the Ingress Controller by running the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default \ --type=merge --patch='{"spec":{"endpointPublishingStrategy": \ {"type":"LoadBalancerService", "loadBalancer": \ {"scope":"External", "allowedSourceRanges":["0.0.0.0/0"]}}}}' 1 1 The example value 0.0.0.0/0 specifies the allowed source range. 26.9.2. Migrating to load balancer allowed source ranges If you have already set the annotation service.beta.kubernetes.io/load-balancer-source-ranges , you can migrate to load balancer allowed source ranges.
When you set the AllowedSourceRanges , the Ingress Controller sets the spec.loadBalancerSourceRanges field based on the AllowedSourceRanges value and unsets the service.beta.kubernetes.io/load-balancer-source-ranges annotation. Note If you have already set the spec.loadBalancerSourceRanges field or the load balancer service annotation service.beta.kubernetes.io/load-balancer-source-ranges in an earlier version of OpenShift Container Platform, the Ingress Controller starts reporting Progressing=True after an upgrade. To fix this, set AllowedSourceRanges , which overwrites the spec.loadBalancerSourceRanges field and clears the service.beta.kubernetes.io/load-balancer-source-ranges annotation. The Ingress Controller starts reporting Progressing=False again. Prerequisites You have set the service.beta.kubernetes.io/load-balancer-source-ranges annotation. Procedure Ensure that the service.beta.kubernetes.io/load-balancer-source-ranges annotation is set: USD oc get svc router-default -n openshift-ingress -o yaml Example output apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/load-balancer-source-ranges: 192.168.0.1/32 Ensure that the spec.loadBalancerSourceRanges field is unset: USD oc get svc router-default -n openshift-ingress -o yaml Example output ... spec: loadBalancerSourceRanges: - 0.0.0.0/0 ... Update your cluster to OpenShift Container Platform 4.12. Set the allowed source ranges API for the ingresscontroller by running the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default \ --type=merge --patch='{"spec":{"endpointPublishingStrategy": \ {"loadBalancer":{"allowedSourceRanges":["0.0.0.0/0"]}}}}' 1 1 The example value 0.0.0.0/0 specifies the allowed source range. 26.9.3. Additional resources Updating your cluster 26.10. Patching existing ingress objects You can update or modify the following fields of existing Ingress objects without recreating the objects or disrupting services to them: Specifications Host Path Backend services SSL/TLS settings Annotations 26.10.1. Patching Ingress objects to resolve an ingressWithoutClassName alert The ingressClassName field specifies the name of the IngressClass object. You must define the ingressClassName field for each Ingress object. If you have not defined the ingressClassName field for an Ingress object, you could experience routing issues. After 24 hours, you will receive an ingressWithoutClassName alert to remind you to set the ingressClassName field. Procedure Patch the Ingress objects with a completed ingressClassName field to ensure proper routing and functionality. List all IngressClass objects: USD oc get ingressclass List all Ingress objects in all namespaces: USD oc get ingress -A Patch the Ingress object: USD oc patch ingress/<ingress_name> --type=merge --patch '{"spec":{"ingressClassName":"openshift-default"}}' Replace <ingress_name> with the name of the Ingress object. This command patches the Ingress object to include the desired ingress class name.
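The patch command above writes the ingressClassName field into an existing object. For reference, the following is a minimal sketch of an Ingress object that already carries the field, so it does not contribute to the ingressWithoutClassName alert. The object name, namespace, host, and backend Service in this sketch are illustrative placeholders rather than values taken from this documentation; only the openshift-default class name matches the patch example above. Example YAML definition for an Ingress object with ingressClassName set (illustrative)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # placeholder name
  namespace: example-namespace     # placeholder namespace
spec:
  ingressClassName: openshift-default   # class set at creation time, so no patch is needed
  rules:
  - host: example.apps.mycluster.example.com   # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service        # placeholder Service name
            port:
              number: 8080

An Ingress created with ingressClassName already populated is picked up by the controller that is registered for the openshift-default IngressClass, and the oc patch step described above is not required for it.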
"apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253",
"{ \"policy\": { \"allowedCIDRs\": [], \"rejectedCIDRs\": [] } }",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {}",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 externalIP: policy: {}",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2",
"policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: - 192.168.132.254/29",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32",
"oc describe networks.config cluster",
"oc edit networks.config cluster",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: 1",
"oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{\"\\n\"}}'",
"oc adm policy add-cluster-role-to-user cluster-admin username",
"oc new-project <project_name>",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n <project_name>",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project <project_name>",
"oc expose service nodejs-ex",
"route.route.openshift.io/nodejs-ex exposed",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None",
"curl --head nodejs-ex-myproject.example.com",
"HTTP/1.1 200 OK",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: finops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - finance - ops",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: dev-router namespace: openshift-ingress-operator spec: namespaceSelector: matchLabels: name: dev",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: devops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - dev - ops",
"oc edit ingresscontroller -n openshift-ingress-operator default",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: NotIn values: - finance - ops - dev",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded",
"oc apply -f router-internal.yaml",
"oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net",
"cat router-internal.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded",
"oc apply -f router-internal.yaml",
"oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net",
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift",
"oc -n hello-openshift create -f hello-openshift-route.yaml",
"oc -n hello-openshift get routes/hello-openshift-edge -o yaml",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: internal namespace: openshift-ingress-operator spec: domain: example.com endpointPublishingStrategy: type: HostNetwork hostNetwork: httpPort: 80 httpsPort: 443 statsPort: 1936",
"oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"Internal\"}}}}'",
"oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml",
"oc -n openshift-ingress delete services/router-default",
"oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"External\"}}}}'",
"oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml",
"oc -n openshift-ingress delete services/router-default",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <custom_ic_name> 1 namespace: openshift-ingress-operator spec: replicas: 1 domain: <custom_ic_domain_name> 2 nodePlacement: nodeSelector: matchLabels: <key>: <value> 3 namespaceSelector: matchLabels: <key>: <value> 4 endpointPublishingStrategy: type: NodePortService",
"oc label node <node_name> <key>=<value> 1",
"oc create -f <ingress_controller_cr>.yaml",
"oc get svc -n openshift-ingress",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-internal-default ClusterIP 172.30.195.74 <none> 80/TCP,443/TCP,1936/TCP 223d router-nodeport-custom-ic3 NodePort 172.30.109.219 <none> 80:32432/TCP,443:31366/TCP,1936:30499/TCP 155m",
"oc new-project <project_name>",
"oc label namespace <project_name> <key>=<value> 1",
"oc new-app --image=<image_name> 1",
"oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name> 1",
"oc get route/hello-openshift -o json | jq '.status.ingress'",
"{ \"conditions\": [ { \"lastTransitionTime\": \"2024-05-17T18:25:41Z\", \"status\": \"True\", \"type\": \"Admitted\" } ], [ { \"host\": \"hello-openshift.nodeportsvc.ipi-cluster.example.com\", \"routerCanonicalHostname\": \"router-nodeportsvc.nodeportsvc.ipi-cluster.example.com\", \"routerName\": \"nodeportsvc\", \"wildcardPolicy\": \"None\" } ], }",
"oc patch --type=merge -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"namespaceSelector\":{\"matchExpressions\":[{\"key\":\"<key>\",\"operator\":\"NotIn\",\"values\":[\"<value>]}]}}}'",
"dig +short <svc_name>-<project_name>.<custom_ic_domain_name>",
"curl <svc_name>-<project_name>.<custom_ic_domain_name>:<port> 1",
"Hello OpenShift!",
"oc adm policy add-cluster-role-to-user cluster-admin username",
"oc new-project <project_name>",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n <project_name>",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project <project_name>",
"oc expose service nodejs-ex",
"route.route.openshift.io/nodejs-ex exposed",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None",
"curl --head nodejs-ex-myproject.example.com",
"HTTP/1.1 200 OK",
"oc project project1",
"apiVersion: v1 kind: Service metadata: name: egress-2 1 spec: ports: - name: db port: 3306 2 loadBalancerIP: loadBalancerSourceRanges: 3 - 10.0.0.0/8 - 192.168.0.0/16 type: LoadBalancer 4 selector: name: mysql 5",
"oc create -f <file-name>",
"oc create -f mysql-lb.yaml",
"oc get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m",
"curl <public-ip>:<port>",
"curl 172.29.121.74:3306",
"mysql -h 172.30.131.89 -u admin -p",
"Enter password: Welcome to the MariaDB monitor. Commands end with ; or \\g. MySQL [(none)]>",
"oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1",
"oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"type\":\"LoadBalancerService\", \"loadBalancer\": {\"scope\":\"External\", \"providerParameters\":{\"type\":\"AWS\", \"aws\": {\"type\":\"Classic\", \"classicLoadBalancer\": {\"connectionIdleTimeout\":\"5m\"}}}}}}}'",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"loadBalancer\":{\"providerParameters\":{\"aws\":{\"classicLoadBalancer\": {\"connectionIdleTimeout\":null}}}}}}}'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc apply -f ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: Classic type: LoadBalancerService",
"oc apply -f ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc replace --force --wait -f ingresscontroller.yml",
"oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS",
"cat ingresscontroller-aws-nlb.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB",
"oc create -f ingresscontroller-aws-nlb.yaml",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc get networks.config cluster -o jsonpath='{.spec.externalIP}{\"\\n\"}'",
"apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: externalIPs: policy: allowedCIDRs: - 192.168.123.0/28",
"oc patch svc <name> -p '{ \"spec\": { \"externalIPs\": [ \"<ip_address>\" ] } }'",
"oc patch svc mysql-55-rhel7 -p '{\"spec\":{\"externalIPs\":[\"192.174.120.10\"]}}'",
"\"mysql-55-rhel7\" patched",
"oc get svc",
"NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m",
"oc adm policy add-cluster-role-to-user cluster-admin <user_name>",
"oc new-project <project_name>",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n <project_name>",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project <project_name>",
"oc edit svc <service_name>",
"spec: ports: - name: 8443-tcp nodePort: 30327 1 port: 8443 protocol: TCP targetPort: 8443 sessionAffinity: None type: NodePort 2",
"oc get svc -n myproject",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s",
"oc delete svc nodejs-ex",
"oc get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE httpd NodePort 172.xx.xx.xx <none> 8443:30327/TCP 109s",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"type\":\"LoadBalancerService\", \"loadbalancer\": {\"scope\":\"External\", \"allowedSourceRanges\":[\"0.0.0.0/0\"]}}}}' 1",
"oc get svc router-default -n openshift-ingress -o yaml",
"apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/load-balancer-source-ranges: 192.168.0.1/32",
"oc get svc router-default -n openshift-ingress -o yaml",
"spec: loadBalancerSourceRanges: - 0.0.0.0/0",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"loadBalancer\":{\"allowedSourceRanges\":[\"0.0.0.0/0\"]}}}}' 1",
"oc get ingressclass",
"oc get ingress -A",
"oc patch ingress/<ingress_name> --type=merge --patch '{\"spec\":{\"ingressClassName\":\"openshift-default\"}}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/configuring-ingress-cluster-traffic |
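The scope patches in the list above only take effect once the router-default service is recreated, which is why the delete commands follow them. As a minimal verification sketch that is not part of the original procedure (the jsonpath and the default controller name are assumptions), the following prints the load balancer scope currently reported in the IngressController status:
oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.status.endpointPublishingStrategy.loadBalancer.scope}'
If the output still shows the old scope, deleting services/router-default in the openshift-ingress namespace forces the service to be recreated with the new settings.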
Chapter 11. Live migration | Chapter 11. Live migration 11.1. Virtual machine live migration 11.1.1. About live migration Live migration is the process of moving a running virtual machine instance (VMI) to another node in the cluster without interrupting the virtual workload or access. If a VMI uses the LiveMigrate eviction strategy, it automatically migrates when the node that the VMI runs on is placed into maintenance mode. You can also manually start live migration by selecting a VMI to migrate. You can use live migration if the following conditions are met: Shared storage with ReadWriteMany (RWX) access mode. Sufficient RAM and network bandwidth. If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU. By default, live migration traffic is encrypted using Transport Layer Security (TLS). 11.1.2. Additional resources Migrating a virtual machine instance to another node Live migration limiting Customizing the storage profile 11.2. Live migration limits and timeouts Apply live migration limits and timeouts so that migration processes do not overwhelm the cluster. Configure these settings by editing the HyperConverged custom resource (CR). 11.2.1. Configuring live migration limits and timeouts Configure live migration limits and timeouts for the cluster by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace. Procedure Edit the HyperConverged CR and add the necessary live migration parameters. USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: 1 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 1 In this example, the spec.liveMigrationConfig array contains the default values for each field. Note You can restore the default value for any spec.liveMigrationConfig field by deleting that key/value pair and saving the file. For example, delete progressTimeout: <value> to restore the default progressTimeout: 150 . 11.2.2. Cluster-wide live migration limits and timeouts Table 11.1. Migration parameters Parameter Description Default parallelMigrationsPerCluster Number of migrations running in parallel in the cluster. 5 parallelOutboundMigrationsPerNode Maximum number of outbound migrations per node. 2 bandwidthPerMigration Bandwidth limit of each migration, where the value is the quantity of bytes per second. For example, a value of 2048Mi means 2048 MiB/s. 0 [1] completionTimeoutPerGiB The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, a virtual machine instance with 6GiB memory times out if it has not completed migration in 4800 seconds. If the Migration Method is BlockMigration , the size of the migrating disks is included in the calculation. 800 progressTimeout The migration is canceled if memory copy fails to make progress in this time, in seconds. 150 The default value of 0 is unlimited. 11.3. Migrating a virtual machine instance to another node Manually initiate a live migration of a virtual machine instance to another node using either the web console or the CLI. Note If a virtual machine uses a host model CPU, you can perform live migration of that virtual machine only between nodes that support its host CPU model. 11.3.1. 
Initiating live migration of a virtual machine instance in the web console Migrate a running virtual machine instance to a different node in the cluster. Note The Migrate action is visible to all users but only admin users can initiate a virtual machine migration. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. You can initiate the migration from this page, which makes it easier to perform actions on multiple virtual machines on the same page, or from the VirtualMachine details page where you can view comprehensive details of the selected virtual machine: Click the Options menu to the virtual machine and select Migrate . Click the virtual machine name to open the VirtualMachine details page and click Actions Migrate . Click Migrate to migrate the virtual machine to another node. 11.3.2. Initiating live migration of a virtual machine instance in the CLI Initiate a live migration of a running virtual machine instance by creating a VirtualMachineInstanceMigration object in the cluster and referencing the name of the virtual machine instance. Procedure Create a VirtualMachineInstanceMigration configuration file for the virtual machine instance to migrate. For example, vmi-migrate.yaml : apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora Create the object in the cluster by running the following command: USD oc create -f vmi-migrate.yaml The VirtualMachineInstanceMigration object triggers a live migration of the virtual machine instance. This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted. Additional resources: Monitoring live migration of a virtual machine instance Cancelling the live migration of a virtual machine instance 11.4. Migrating a virtual machine over a dedicated additional network You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. 11.4.1. Configuring a dedicated secondary network for virtual machine live migration To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition for the openshift-cnv namespace by using the CLI. Then, add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You logged in to the cluster as a user with the cluster-admin role. The Multus Container Network Interface (CNI) plugin is installed on the cluster. Every node on the cluster has at least two Network Interface Cards (NICs), and the NICs to be used for live migration are connected to the same VLAN. The virtual machine (VM) is running with the LiveMigrate eviction strategy. Procedure Create a NetworkAttachmentDefinition manifest. Example configuration file apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv 2 spec: config: '{ "cniVersion": "0.3.1", "name": "migration-bridge", "type": "macvlan", "master": "eth1", 3 "mode": "bridge", "ipam": { "type": "whereabouts", 4 "range": "10.200.5.0/24" 5 } }' 1 The name of the NetworkAttachmentDefinition object. 2 The namespace where the NetworkAttachmentDefinition object resides. This must be openshift-cnv . 3 The name of the NIC to be used for live migration. 
4 The name of the CNI plugin that provides the network for this network attachment definition. 5 The IP address range for the secondary network. This range must not have any overlap with the IP addresses of the main network. Open the HyperConverged CR in your default editor by running the following command: oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR. For example: Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: my-secondary-network 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 ... 1 The name of the Multus NetworkAttachmentDefinition object to be used for live migrations. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network. Verification When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata. oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}' 11.4.2. Additional resources Live migration limits and timeouts 11.5. Monitoring live migration of a virtual machine instance You can monitor the progress of a live migration of a virtual machine instance from either the web console or the CLI. 11.5.1. Monitoring live migration of a virtual machine instance in the web console For the duration of the migration, the virtual machine has a status of Migrating . This status is displayed on the VirtualMachines page or on the VirtualMachine details page of the migrating virtual machine. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. 11.5.2. Monitoring live migration of a virtual machine instance in the CLI The status of the virtual machine migration is stored in the Status component of the VirtualMachineInstance configuration. Procedure Use the oc describe command on the migrating virtual machine instance: USD oc describe vmi vmi-fedora Example output ... Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true 11.6. Cancelling the live migration of a virtual machine instance Cancel the live migration so that the virtual machine instance remains on the original node. You can cancel a live migration from either the web console or the CLI. 11.6.1. Cancelling live migration of a virtual machine instance in the web console You can cancel the live migration of a virtual machine instance in the web console. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Click the Options menu beside a virtual machine and select Cancel Migration . 11.6.2. 
Cancelling live migration of a virtual machine instance in the CLI Cancel the live migration of a virtual machine instance by deleting the VirtualMachineInstanceMigration object associated with the migration. Procedure Delete the VirtualMachineInstanceMigration object that triggered the live migration, migration-job in this example: USD oc delete vmim migration-job 11.7. Configuring virtual machine eviction strategy The LiveMigrate eviction strategy ensures that a virtual machine instance is not interrupted if the node is placed into maintenance or drained. Virtual machines instances with this eviction strategy will be live migrated to another node. 11.7.1. Configuring custom virtual machines with the LiveMigration eviction strategy You only need to configure the LiveMigration eviction strategy on custom virtual machines. Common templates have this eviction strategy configured by default. Procedure Add the evictionStrategy: LiveMigrate option to the spec.template.spec section in the virtual machine configuration file. This example uses oc edit to update the relevant snippet of the VirtualMachine configuration file: USD oc edit vm <custom-vm> -n <my-namespace> apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate ... Restart the virtual machine for the update to take effect: USD virtctl restart <custom-vm> -n <my-namespace> 11.8. Configuring live migration policies You can define different migration configurations for specified groups of virtual machine instances (VMIs) by using a live migration policy. Important Live migration policy is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 11.8.1. Configuring a live migration policy Use the MigrationPolicy custom resource definition (CRD) to define migration policies for one or more groups of selected virtual machine instances (VMIs). You can specify groups of VMIs by using any combination of the following: Virtual machine instance labels such as size , os , gpu , and other VMI labels. Namespace labels such as priority , bandwidth , hpc-workload , and other namespace labels. For the policy to apply to a specific group of VMIs, all labels on the group of VMIs must match the labels in the policy. Note If multiple live migration policies apply to a VMI, the policy with the highest number of matching labels takes precedence. If multiple policies meet this criteria, the policies are sorted by lexicographic order of the matching labels keys, and the first one in that order takes precedence. Procedure Create a MigrationPolicy CRD for your specified group of VMIs. 
The following example YAML configures a group with the labels hpc-workloads:true , xyz-workloads-type: "" , workload-type: db , and operating-system: "" : apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: my-awesome-policy spec: # Migration Configuration allowAutoConverge: true bandwidthPerMigration: 217Ki completionTimeoutPerGiB: 23 allowPostCopy: false # Matching to VMIs selectors: namespaceSelector: 1 matchLabels: hpc-workloads: "True" xyz-workloads-type: "" virtualMachineInstanceSelector: 2 matchLabels: workload-type: "db" operating-system: "" 1 Use namespaceSelector to define a group of VMIs by using namespace labels. 2 Use virtualMachineInstanceSelector to define a group of VMIs by using VMI labels. | [
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: 1 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150",
"apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora",
"oc create -f vmi-migrate.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv 2 spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 3 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 4 \"range\": \"10.200.5.0/24\" 5 } }'",
"edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: my-secondary-network 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150",
"get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'",
"oc describe vmi vmi-fedora",
"Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true",
"oc delete vmim migration-job",
"oc edit vm <custom-vm> -n <my-namespace>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate",
"virtctl restart <custom-vm> -n <my-namespace>",
"apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: my-awesome-policy spec: # Migration Configuration allowAutoConverge: true bandwidthPerMigration: 217Ki completionTimeoutPerGiB: 23 allowPostCopy: false # Matching to VMIs selectors: namespaceSelector: 1 matchLabels: hpc-workloads: \"True\" xyz-workloads-type: \"\" virtualMachineInstanceSelector: 2 matchLabels: workload-type: \"db\" operating-system: \"\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/virtualization/live-migration |
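For the MigrationPolicy example above to apply, the labels in its selectors must actually exist on the namespace and on the virtual machine instances. The following is a minimal sketch rather than part of the original chapter; the namespace name hpc-project and the virtual machine name db-vm are assumptions, and VMI labels are normally set under spec.template.metadata.labels of the VirtualMachine so that they survive restarts:
oc label namespace hpc-project hpc-workloads=True xyz-workloads-type=
oc patch vm db-vm --type=merge -p '{"spec":{"template":{"metadata":{"labels":{"workload-type":"db","operating-system":""}}}}}'
Once the labels match, live migrations of those VMIs use the bandwidth and timeout values from the policy instead of the cluster-wide settings in the HyperConverged CR.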
Chapter 26. Probe schema reference | Chapter 26. Probe schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaExporterSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , TlsSidecar , ZookeeperClusterSpec Property Description failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. integer initialDelaySeconds The initial delay before the health is first checked. Defaults to 15 seconds. Minimum value is 0. integer periodSeconds How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. integer successThreshold Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1. integer timeoutSeconds The timeout for each attempted health check. Defaults to 5 seconds. Minimum value is 1. integer | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-probe-reference
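As a concrete illustration of where the Probe type is used, the sketch below sets a readiness and a liveness probe on a Kafka cluster. It is not taken from the reference itself; the cluster name my-cluster is a placeholder and the other required Kafka fields (replicas, listeners, storage, and so on) are omitted:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
      periodSeconds: 10
      failureThreshold: 3
Any field left out keeps the default listed in the table above.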
Chapter 3. Managing alarms | Chapter 3. Managing alarms You can use the Telemetry Alarming service (aodh) to trigger actions based on defined rules against metric or event data collected by Ceilometer or Gnocchi. Alarms can be in one of the following states: ok The metric or event is in an acceptable state. firing The metric or event is outside of the defined ok state. insufficient data The alarm state is unknown. This can be for several reasons, for example, there is no data for the requested granularity, the check has not been executed yet, and so on. 3.1. Viewing existing alarms You can view existing Telemetry alarm information and list the meters assigned to a resource to check the current state of the metrics. Procedure List the existing Telemetry alarms: # openstack alarm list +--------------------------------------+--------------------------------------------+----------------------------+-------------------+----------+---------+ | alarm_id | type | name | state | severity | enabled | +--------------------------------------+--------------------------------------------+----------------------------+-------------------+----------+---------+ | 922f899c-27c8-4c7d-a2cf-107be51ca90a | gnocchi_aggregation_by_resources_threshold | iops-monitor-read-requests | insufficient data | low | True | +--------------------------------------+--------------------------------------------+----------------------------+-------------------+----------+---------+ To list the meters assigned to a resource, specify the UUID of the resource. For example: # openstack metric resource show 22592ae1-922a-4f51-b935-20c938f48753 | Field | Value | +-----------------------+-------------------------------------------------------------------+ | created_by_project_id | 1adaed3aaa7f454c83307688c0825978 | | created_by_user_id | d8429405a2764c3bb5184d29bd32c46a | | creator | d8429405a2764c3bb5184d29bd32c46a:1adaed3aaa7f454c83307688c0825978 | | ended_at | None | | id | 22592ae1-922a-4f51-b935-20c938f48753 | | metrics | cpu: a0375b0e-f799-47ea-b4ba-f494cf562ad8 | | | disk.ephemeral.size: cd082824-dfd6-49c3-afdf-6bfc8c12bd2a | | | disk.root.size: cd88dc61-ba85-45eb-a7b9-4686a6a0787b | | | memory.usage: 7a1e787c-5fa7-4ac3-a2c6-4c3821eaf80a | | | memory: ebd38ef7-cdc1-49f1-87c1-0b627d7c189e | | | vcpus: cc9353f1-bb24-4d37-ab8f-d3e887ca8856 | | original_resource_id | 22592ae1-922a-4f51-b935-20c938f48753 | | project_id | cdda46e0b5be4782bc0480dac280832a | | revision_end | None | | revision_start | 2021-09-16T17:00:41.227453+00:00 | | started_at | 2021-09-16T16:17:08.444032+00:00 | | type | instance | | user_id | f00de1d74408428cadf483ea7dbb2a83 | +-----------------------+-------------------------------------------------------------------+ 3.2. Creating an alarm Use the Telemetry Alarming service (aodh) to create an alarm that triggers when a particular condition is met, for example, when a threshold value is reached. In this example, the alarm activates and adds a log entry when the average CPU utilization for an individual instance exceeds 80%. Procedure Archive policies are pre-populated during the deployment process and you rarely need to create a new archive policy. However, if there is no configured archive policy, you must create one. To create an archive policy that creates metrics for 5s * 86400 points (5 days), use the following command: # openstack archive-policy create <name> \ -d granularity:5s,points:86400 \ -b 3 -m mean -m rate:mean + Replace <name> with the name of the archive policy. 
Note Ensure that you set the value of the evaluation period for the Telemetry Alarming service to an integer greater than 60. The Ceilometer polling interval is linked to the evaluation period. Ensure that you set the Ceilometer polling interval value to a number between 60 and 600 and ensure that the value is greater than the value of the evaluation period for the Telemetry Alarming service. If the Ceilometer polling interval is too low, it can severely impact system load. Create an alarm and use a query to isolate the specific ID of the instance for monitoring purposes. The ID of the instance in the following example is 94619081-abf5-4f1f-81c7-9cedaa872403. Note To calculate the threshold value, use the following formula: 1,000,000,000 x {granularity} x {percentage_in_decimal} # openstack alarm create \ --type gnocchi_aggregation_by_resources_threshold \ --name cpu_usage_high \ --granularity 5 --metric cpu \ --threshold 48000000000 \ --aggregation-method rate:mean \ --resource-type instance \ --query '{"=": {"id": "94619081-abf5-4f1f-81c7-9cedaa872403"}}' --alarm-action 'log://' +---------------------------+-------------------------------------------------------+ | Field | Value | +---------------------------+-------------------------------------------------------+ | aggregation_method | rate:mean | | alarm_actions | [u'log://'] | | alarm_id | b794adc7-ed4f-4edb-ace4-88cbe4674a94 | | comparison_operator | eq | | description | gnocchi_aggregation_by_resources_threshold alarm rule | | enabled | True | | evaluation_periods | 1 | | granularity | 5 | | insufficient_data_actions | [] | | metric | cpu | | name | cpu_usage_high | | ok_actions | [] | | project_id | 13c52c41e0e543d9841a3e761f981c20 | | query | {"=": {"id": "94619081-abf5-4f1f-81c7-9cedaa872403"}} | | repeat_actions | False | | resource_type | instance | | severity | low | | state | insufficient data | | state_timestamp | 2016-12-09T05:18:53.326000 | | threshold | 48000000000.0 | | time_constraints | [] | | timestamp | 2016-12-09T05:18:53.326000 | | type | gnocchi_aggregation_by_resources_threshold | | user_id | 32d3f2c9a234423cb52fb69d3741dbbc | +---------------------------+-------------------------------------------------------+ 3.3. Editing an alarm When you edit an alarm, you increase or decrease the value threshold of the alarm. Procedure To update the threshold value, use the openstack alarm update command. For example, to increase the alarm threshold to 75%, use the following command: # openstack alarm update --name cpu_usage_high --threshold 75 3.4. Disabling an alarm You can disable and enable alarms. Procedure Disable the alarm: 3.5. Deleting an alarm Use the openstack alarm delete command to delete an alarm. Procedure To delete an alarm, enter the following command: 3.6. Example: Monitoring the disk activity of instances This example demonstrates how to use an alarm that is part of the Telemetry Alarming service to monitor the cumulative disk activity for all the instances contained within a particular project. Procedure Review the existing projects and select the appropriate UUID of the project that you want to monitor. 
This example uses the admin tenant: USD openstack project list +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 745d33000ac74d30a77539f8920555e7 | admin | | 983739bb834a42ddb48124a38def8538 | services | | be9e767afd4c4b7ead1417c6dfedde2b | demo | +----------------------------------+----------+ Use the project UUID to create an alarm that analyses the sum() of all read requests generated by the instances in the admin tenant. You can further restrain the query by using the --query parameter: # openstack alarm create \ --type gnocchi_aggregation_by_resources_threshold \ --name iops-monitor-read-requests \ --metric disk.read.requests.rate \ --threshold 42000 \ --aggregation-method sum \ --resource-type instance \ --query '{"=": {"project_id": "745d33000ac74d30a77539f8920555e7"}}' +---------------------------+-----------------------------------------------------------+ | Field | Value | +---------------------------+-----------------------------------------------------------+ | aggregation_method | sum | | alarm_actions | [] | | alarm_id | 192aba27-d823-4ede-a404-7f6b3cc12469 | | comparison_operator | eq | | description | gnocchi_aggregation_by_resources_threshold alarm rule | | enabled | True | | evaluation_periods | 1 | | granularity | 60 | | insufficient_data_actions | [] | | metric | disk.read.requests.rate | | name | iops-monitor-read-requests | | ok_actions | [] | | project_id | 745d33000ac74d30a77539f8920555e7 | | query | {"=": {"project_id": "745d33000ac74d30a77539f8920555e7"}} | | repeat_actions | False | | resource_type | instance | | severity | low | | state | insufficient data | | state_timestamp | 2016-11-08T23:41:22.919000 | | threshold | 42000.0 | | time_constraints | [] | | timestamp | 2016-11-08T23:41:22.919000 | | type | gnocchi_aggregation_by_resources_threshold | | user_id | 8c4aea738d774967b4ef388eb41fef5e | +---------------------------+-----------------------------------------------------------+ 3.7. Example: Monitoring CPU use To monitor the performance of an instance, examine the Gnocchi database to identify which metrics you can monitor, such as memory or CPU usage. 
Procedure To identify the metrics you can monitor, enter the openstack metric resource show command with an instance UUID: USD openstack metric resource show --type instance 22592ae1-922a-4f51-b935-20c938f48753 +-----------------------+-------------------------------------------------------------------+ | Field | Value | +-----------------------+-------------------------------------------------------------------+ | availability_zone | nova | | created_at | 2021-09-16T16:16:24+00:00 | | created_by_project_id | 1adaed3aaa7f454c83307688c0825978 | | created_by_user_id | d8429405a2764c3bb5184d29bd32c46a | | creator | d8429405a2764c3bb5184d29bd32c46a:1adaed3aaa7f454c83307688c0825978 | | deleted_at | None | | display_name | foo-2 | | ended_at | None | | flavor_id | 0e5bae38-a949-4509-9868-82b353ef7ffb | | flavor_name | workload_flavor_0 | | host | compute-0.redhat.local | | id | 22592ae1-922a-4f51-b935-20c938f48753 | | image_ref | 3cde20b4-7620-49f3-8622-eeacbdc43d49 | | launched_at | 2021-09-16T16:17:03+00:00 | | metrics | cpu: a0375b0e-f799-47ea-b4ba-f494cf562ad8 | | | disk.ephemeral.size: cd082824-dfd6-49c3-afdf-6bfc8c12bd2a | | | disk.root.size: cd88dc61-ba85-45eb-a7b9-4686a6a0787b | | | memory.usage: 7a1e787c-5fa7-4ac3-a2c6-4c3821eaf80a | | | memory: ebd38ef7-cdc1-49f1-87c1-0b627d7c189e | | | vcpus: cc9353f1-bb24-4d37-ab8f-d3e887ca8856 | | original_resource_id | 22592ae1-922a-4f51-b935-20c938f48753 | | project_id | cdda46e0b5be4782bc0480dac280832a | | revision_end | None | | revision_start | 2021-09-16T17:00:41.227453+00:00 | | server_group | None | | started_at | 2021-09-16T16:17:08.444032+00:00 | | type | instance | | user_id | f00de1d74408428cadf483ea7dbb2a83 | +-----------------------+-------------------------------------------------------------------+ In this result, the metrics value lists the components you can monitor with the Alarming service, for example cpu . To monitor CPU usage, use the cpu metric: USD openstack metric show --resource-id 22592ae1-922a-4f51-b935-20c938f48753 cpu +--------------------------------+-------------------------------------------------------------------+ | Field | Value | +--------------------------------+-------------------------------------------------------------------+ | archive_policy/name | ceilometer-high-rate | | creator | d8429405a2764c3bb5184d29bd32c46a:1adaed3aaa7f454c83307688c0825978 | | id | a0375b0e-f799-47ea-b4ba-f494cf562ad8 | | name | cpu | | resource/created_by_project_id | 1adaed3aaa7f454c83307688c0825978 | | resource/created_by_user_id | d8429405a2764c3bb5184d29bd32c46a | | resource/creator | d8429405a2764c3bb5184d29bd32c46a:1adaed3aaa7f454c83307688c0825978 | | resource/ended_at | None | | resource/id | 22592ae1-922a-4f51-b935-20c938f48753 | | resource/original_resource_id | 22592ae1-922a-4f51-b935-20c938f48753 | | resource/project_id | cdda46e0b5be4782bc0480dac280832a | | resource/revision_end | None | | resource/revision_start | 2021-09-16T17:00:41.227453+00:00 | | resource/started_at | 2021-09-16T16:17:08.444032+00:00 | | resource/type | instance | | resource/user_id | f00de1d74408428cadf483ea7dbb2a83 | | unit | ns | +--------------------------------+-------------------------------------------------------------------+ The archive_policy defines the aggregation interval for calculating the std, count, min, max, sum, mean values. 
Inspect the currently selected archive policy for the cpu metric: USD openstack metric archive-policy show ceilometer-high-rate +---------------------+-------------------------------------------------------------------+ | Field | Value | +---------------------+-------------------------------------------------------------------+ | aggregation_methods | rate:mean, mean | | back_window | 0 | | definition | - timespan: 1:00:00, granularity: 0:00:01, points: 3600 | | | - timespan: 1 day, 0:00:00, granularity: 0:01:00, points: 1440 | | | - timespan: 365 days, 0:00:00, granularity: 1:00:00, points: 8760 | | name | ceilometer-high-rate | +---------------------+-------------------------------------------------------------------+ Use the Alarming service to create a monitoring task that queries cpu . This task triggers events based on the settings that you specify. For example, to raise a log entry when the CPU of an instance spikes over 80% for an extended duration, use the following command: USD openstack alarm create \ --project-id 3cee262b907b4040b26b678d7180566b \ --name high-cpu \ --type gnocchi_resources_threshold \ --description 'High CPU usage' \ --metric cpu \ --threshold 800,000,000.0 \ --comparison-operator ge \ --aggregation-method mean \ --granularity 300 \ --evaluation-periods 1 \ --alarm-action 'log://' \ --ok-action 'log://' \ --resource-type instance \ --resource-id 22592ae1-922a-4f51-b935-20c938f48753 +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | aggregation_method | rate:mean | | alarm_actions | ['log:'] | | alarm_id | c7b326bd-a68c-4247-9d2b-56d9fb18bf38 | | comparison_operator | ge | | description | High CPU usage | | enabled | True | | evaluation_periods | 1 | | granularity | 300 | | insufficient_data_actions | [] | | metric | cpu | | name | high-cpu | | ok_actions | ['log:'] | | project_id | cdda46e0b5be4782bc0480dac280832a | | repeat_actions | False | | resource_id | 22592ae1-922a-4f51-b935-20c938f48753 | | resource_type | instance | | severity | low | | state | insufficient data | | state_reason | Not evaluated yet | | state_timestamp | 2021-09-21T08:02:57.090592 | | threshold | 800000000.0 | | time_constraints | [] | | timestamp | 2021-09-21T08:02:57.090592 | | type | gnocchi_resources_threshold | | user_id | f00de1d74408428cadf483ea7dbb2a83 | +---------------------------+--------------------------------------+ comparison-operator: The ge operator defines that the alarm triggers if the CPU usage is greater than or equal to 80%. granularity: Metrics have an archive policy associated with them; the policy can have various granularities. For example, 5 minutes aggregation for 1 hour + 1 hour aggregation over a month. The granularity value must match the duration described in the archive policy. evaluation-periods: Number of granularity periods that need to pass before the alarm triggers. For example, if you set this value to 2, the CPU usage must be over 80% for two polling periods before the alarm triggers. [u'log://']: When you set alarm_actions or ok_actions to [u'log://'] , events, for example, the alarm is triggered or returns to a normal state, are recorded to the aodh log file. Note You can define different actions to run when an alarm is triggered (alarm_actions), and when it returns to a normal state (ok_actions), such as a webhook URL. 3.8. 
Viewing alarm history To check if a particular alarm has been triggered, you can query the alarm history and view the event information. Procedure Use the openstack alarm-history show command: USD openstack alarm-history show 1625015c-49b8-4e3f-9427-3c312a8615dd --fit-width +----------------------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+ | timestamp | type | detail | event_id | +----------------------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+ | 2017-11-16T05:21:47.850094 | state transition | {"transition_reason": "Transition to ok due to 1 samples inside threshold, most recent: 0.0366665763", "state": "ok"} | 3b51f09d-ded1-4807-b6bb-65fdc87669e4 | +----------------------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+ | [
"openstack alarm list +--------------------------------------+--------------------------------------------+----------------------------+-------------------+----------+---------+ | alarm_id | type | name | state | severity | enabled | +--------------------------------------+--------------------------------------------+----------------------------+-------------------+----------+---------+ | 922f899c-27c8-4c7d-a2cf-107be51ca90a | gnocchi_aggregation_by_resources_threshold | iops-monitor-read-requests | insufficient data | low | True | +--------------------------------------+--------------------------------------------+----------------------------+-------------------+----------+---------+",
"openstack metric resource show 22592ae1-922a-4f51-b935-20c938f48753 | Field | Value | +-----------------------+-------------------------------------------------------------------+ | created_by_project_id | 1adaed3aaa7f454c83307688c0825978 | | created_by_user_id | d8429405a2764c3bb5184d29bd32c46a | | creator | d8429405a2764c3bb5184d29bd32c46a:1adaed3aaa7f454c83307688c0825978 | | ended_at | None | | id | 22592ae1-922a-4f51-b935-20c938f48753 | | metrics | cpu: a0375b0e-f799-47ea-b4ba-f494cf562ad8 | | | disk.ephemeral.size: cd082824-dfd6-49c3-afdf-6bfc8c12bd2a | | | disk.root.size: cd88dc61-ba85-45eb-a7b9-4686a6a0787b | | | memory.usage: 7a1e787c-5fa7-4ac3-a2c6-4c3821eaf80a | | | memory: ebd38ef7-cdc1-49f1-87c1-0b627d7c189e | | | vcpus: cc9353f1-bb24-4d37-ab8f-d3e887ca8856 | | original_resource_id | 22592ae1-922a-4f51-b935-20c938f48753 | | project_id | cdda46e0b5be4782bc0480dac280832a | | revision_end | None | | revision_start | 2021-09-16T17:00:41.227453+00:00 | | started_at | 2021-09-16T16:17:08.444032+00:00 | | type | instance | | user_id | f00de1d74408428cadf483ea7dbb2a83 | +-----------------------+-------------------------------------------------------------------+",
"openstack archive-policy create <name> -d granularity:5s,points:86400 -b 3 -m mean -m rate:mean",
"openstack alarm create --type gnocchi_aggregation_by_resources_threshold --name cpu_usage_high --granularity 5 --metric cpu --threshold 48000000000 --aggregation-method rate:mean --resource-type instance --query '{\"=\": {\"id\": \"94619081-abf5-4f1f-81c7-9cedaa872403\"}}' --alarm-action 'log://' +---------------------------+-------------------------------------------------------+ | Field | Value | +---------------------------+-------------------------------------------------------+ | aggregation_method | rate:mean | | alarm_actions | [u'log://'] | | alarm_id | b794adc7-ed4f-4edb-ace4-88cbe4674a94 | | comparison_operator | eq | | description | gnocchi_aggregation_by_resources_threshold alarm rule | | enabled | True | | evaluation_periods | 1 | | granularity | 5 | | insufficient_data_actions | [] | | metric | cpu | | name | cpu_usage_high | | ok_actions | [] | | project_id | 13c52c41e0e543d9841a3e761f981c20 | | query | {\"=\": {\"id\": \"94619081-abf5-4f1f-81c7-9cedaa872403\"}} | | repeat_actions | False | | resource_type | instance | | severity | low | | state | insufficient data | | state_timestamp | 2016-12-09T05:18:53.326000 | | threshold | 48000000000.0 | | time_constraints | [] | | timestamp | 2016-12-09T05:18:53.326000 | | type | gnocchi_aggregation_by_resources_threshold | | user_id | 32d3f2c9a234423cb52fb69d3741dbbc | +---------------------------+-------------------------------------------------------+",
"openstack alarm update --name cpu_usage_high --threshold 75",
"openstack alarm update --name cpu_usage_high --enabled=false",
"openstack alarm delete --name cpu_usage_high",
"openstack project list +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 745d33000ac74d30a77539f8920555e7 | admin | | 983739bb834a42ddb48124a38def8538 | services | | be9e767afd4c4b7ead1417c6dfedde2b | demo | +----------------------------------+----------+",
"openstack alarm create --type gnocchi_aggregation_by_resources_threshold --name iops-monitor-read-requests --metric disk.read.requests.rate --threshold 42000 --aggregation-method sum --resource-type instance --query '{\"=\": {\"project_id\": \"745d33000ac74d30a77539f8920555e7\"}}' +---------------------------+-----------------------------------------------------------+ | Field | Value | +---------------------------+-----------------------------------------------------------+ | aggregation_method | sum | | alarm_actions | [] | | alarm_id | 192aba27-d823-4ede-a404-7f6b3cc12469 | | comparison_operator | eq | | description | gnocchi_aggregation_by_resources_threshold alarm rule | | enabled | True | | evaluation_periods | 1 | | granularity | 60 | | insufficient_data_actions | [] | | metric | disk.read.requests.rate | | name | iops-monitor-read-requests | | ok_actions | [] | | project_id | 745d33000ac74d30a77539f8920555e7 | | query | {\"=\": {\"project_id\": \"745d33000ac74d30a77539f8920555e7\"}} | | repeat_actions | False | | resource_type | instance | | severity | low | | state | insufficient data | | state_timestamp | 2016-11-08T23:41:22.919000 | | threshold | 42000.0 | | time_constraints | [] | | timestamp | 2016-11-08T23:41:22.919000 | | type | gnocchi_aggregation_by_resources_threshold | | user_id | 8c4aea738d774967b4ef388eb41fef5e | +---------------------------+-----------------------------------------------------------+",
"openstack metric resource show --type instance 22592ae1-922a-4f51-b935-20c938f48753 +-----------------------+-------------------------------------------------------------------+ | Field | Value | +-----------------------+-------------------------------------------------------------------+ | availability_zone | nova | | created_at | 2021-09-16T16:16:24+00:00 | | created_by_project_id | 1adaed3aaa7f454c83307688c0825978 | | created_by_user_id | d8429405a2764c3bb5184d29bd32c46a | | creator | d8429405a2764c3bb5184d29bd32c46a:1adaed3aaa7f454c83307688c0825978 | | deleted_at | None | | display_name | foo-2 | | ended_at | None | | flavor_id | 0e5bae38-a949-4509-9868-82b353ef7ffb | | flavor_name | workload_flavor_0 | | host | compute-0.redhat.local | | id | 22592ae1-922a-4f51-b935-20c938f48753 | | image_ref | 3cde20b4-7620-49f3-8622-eeacbdc43d49 | | launched_at | 2021-09-16T16:17:03+00:00 | | metrics | cpu: a0375b0e-f799-47ea-b4ba-f494cf562ad8 | | | disk.ephemeral.size: cd082824-dfd6-49c3-afdf-6bfc8c12bd2a | | | disk.root.size: cd88dc61-ba85-45eb-a7b9-4686a6a0787b | | | memory.usage: 7a1e787c-5fa7-4ac3-a2c6-4c3821eaf80a | | | memory: ebd38ef7-cdc1-49f1-87c1-0b627d7c189e | | | vcpus: cc9353f1-bb24-4d37-ab8f-d3e887ca8856 | | original_resource_id | 22592ae1-922a-4f51-b935-20c938f48753 | | project_id | cdda46e0b5be4782bc0480dac280832a | | revision_end | None | | revision_start | 2021-09-16T17:00:41.227453+00:00 | | server_group | None | | started_at | 2021-09-16T16:17:08.444032+00:00 | | type | instance | | user_id | f00de1d74408428cadf483ea7dbb2a83 | +-----------------------+-------------------------------------------------------------------+",
"openstack metric show --resource-id 22592ae1-922a-4f51-b935-20c938f48753 cpu +--------------------------------+-------------------------------------------------------------------+ | Field | Value | +--------------------------------+-------------------------------------------------------------------+ | archive_policy/name | ceilometer-high-rate | | creator | d8429405a2764c3bb5184d29bd32c46a:1adaed3aaa7f454c83307688c0825978 | | id | a0375b0e-f799-47ea-b4ba-f494cf562ad8 | | name | cpu | | resource/created_by_project_id | 1adaed3aaa7f454c83307688c0825978 | | resource/created_by_user_id | d8429405a2764c3bb5184d29bd32c46a | | resource/creator | d8429405a2764c3bb5184d29bd32c46a:1adaed3aaa7f454c83307688c0825978 | | resource/ended_at | None | | resource/id | 22592ae1-922a-4f51-b935-20c938f48753 | | resource/original_resource_id | 22592ae1-922a-4f51-b935-20c938f48753 | | resource/project_id | cdda46e0b5be4782bc0480dac280832a | | resource/revision_end | None | | resource/revision_start | 2021-09-16T17:00:41.227453+00:00 | | resource/started_at | 2021-09-16T16:17:08.444032+00:00 | | resource/type | instance | | resource/user_id | f00de1d74408428cadf483ea7dbb2a83 | | unit | ns | +--------------------------------+-------------------------------------------------------------------+",
"openstack metric archive-policy show ceilometer-high-rate +---------------------+-------------------------------------------------------------------+ | Field | Value | +---------------------+-------------------------------------------------------------------+ | aggregation_methods | rate:mean, mean | | back_window | 0 | | definition | - timespan: 1:00:00, granularity: 0:00:01, points: 3600 | | | - timespan: 1 day, 0:00:00, granularity: 0:01:00, points: 1440 | | | - timespan: 365 days, 0:00:00, granularity: 1:00:00, points: 8760 | | name | ceilometer-high-rate | +---------------------+-------------------------------------------------------------------+",
"openstack alarm create --project-id 3cee262b907b4040b26b678d7180566b --name high-cpu --type gnocchi_resources_threshold --description 'High CPU usage' --metric cpu --threshold 800,000,000.0 --comparison-operator ge --aggregation-method mean --granularity 300 --evaluation-periods 1 --alarm-action 'log://' --ok-action 'log://' --resource-type instance --resource-id 22592ae1-922a-4f51-b935-20c938f48753 +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | aggregation_method | rate:mean | | alarm_actions | ['log:'] | | alarm_id | c7b326bd-a68c-4247-9d2b-56d9fb18bf38 | | comparison_operator | ge | | description | High CPU usage | | enabled | True | | evaluation_periods | 1 | | granularity | 300 | | insufficient_data_actions | [] | | metric | cpu | | name | high-cpu | | ok_actions | ['log:'] | | project_id | cdda46e0b5be4782bc0480dac280832a | | repeat_actions | False | | resource_id | 22592ae1-922a-4f51-b935-20c938f48753 | | resource_type | instance | | severity | low | | state | insufficient data | | state_reason | Not evaluated yet | | state_timestamp | 2021-09-21T08:02:57.090592 | | threshold | 800000000.0 | | time_constraints | [] | | timestamp | 2021-09-21T08:02:57.090592 | | type | gnocchi_resources_threshold | | user_id | f00de1d74408428cadf483ea7dbb2a83 | +---------------------------+--------------------------------------+",
"openstack alarm-history show 1625015c-49b8-4e3f-9427-3c312a8615dd --fit-width +----------------------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+ | timestamp | type | detail | event_id | +----------------------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+ | 2017-11-16T05:21:47.850094 | state transition | {\"transition_reason\": \"Transition to ok due to 1 samples inside threshold, most recent: 0.0366665763\", \"state\": \"ok\"} | 3b51f09d-ded1-4807-b6bb-65fdc87669e4 | +----------------------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/operational_measurements/managing-alarms_assembly |
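A worked example of the threshold formula in the chapter above may help, since the units are nanoseconds of CPU time: with 1,000,000,000 x {granularity} x {percentage_in_decimal}, a granularity of 60 seconds and a 70% target give 1,000,000,000 x 60 x 0.7 = 42,000,000,000. The numbers here are illustrative, not taken from the text; an alarm built around them would use --granularity 60 and --threshold 42000000000, and if the target changes later the recalculated value can be applied with the update command already shown in the chapter, for example:
openstack alarm update --name cpu_usage_high --threshold 42000000000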
Deploying OpenShift Data Foundation using Red Hat OpenShift Service on AWS with hosted control planes | Deploying OpenShift Data Foundation using Red Hat OpenShift Service on AWS with hosted control planes Red Hat OpenShift Data Foundation 4.17 Instructions for deploying OpenShift Data Foundation using Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on ROSA with hosted control planes (HCP). | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/index |
Chapter 5. Scaling storage of VMware OpenShift Data Foundation cluster | Chapter 5. Scaling storage of VMware OpenShift Data Foundation cluster 5.1. Scaling up storage of VMware OpenShift Data Foundation cluster To increase the storage capacity in a dynamically created VMware storage cluster on user-provisioned and installer-provisioned infrastructures, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. You can scale up storage capacity of a VMware Red Hat OpenShift Data Foundation cluster in two ways: Scaling up storage capacity on a Vmware cluster by adding a new set of OSDs . Scaling up storage capacity on a VMware cluster by resizing existing OSDs . 5.1.1. Scaling up storage on a VMware cluster by adding a new set of OSDs To increase the storage capacity in a dynamically created storage cluster on a VMware user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disk is of the same size and type as the disk used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 5.1.2. 
Scaling up storage capacity on VMware a cluster by resizing existing OSDs To increase the storage capacity on a cluster, you can add storage capacity by resizing existing OSDs. Important Before resizing OSDs, verify that the underlying datastore has enough available space for the resize to succeed. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Update the dataPVCTemplate size for the storageDeviceSets with the new desired size using the oc patch command. In this example YAML, the storage parameter under storageDeviceSets reflects the current size of 512Gi . Using the oc patch command: Get the current OSD storage for the storageDeviceSets you are increasing storage for: Increase the storage with the desired value (the following example reflect the size change of 2Ti): Wait for the OSDs to restart. Confirm that the resize took effect: Verify that for all the resized OSDs, resize is completed and reflected correctly in the CAPACITY column of the command output. If the resize did not take effect, restart the OSD pods again. It may take multiple restarts for the resize to complete. 5.2. Scaling up a cluster created using local storage devices To scale up an OpenShift Data Foundation cluster which was created using local storage devices, you need to add a new disk to the storage node. The new disks size must be of the same size as the disks used during the deployment because OpenShift Data Foundation does not support heterogeneous disks/OSDs. For deployments having three failure domains, you can scale up the storage by adding disks in the multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if we scale by adding six disks, two disks are taken from nodes in each of the three failure domains. If the number of disks is not in multiples of three, it will only consume the disk to the maximum in the multiple of three while the remaining disks remain unused. For deployments having less than three failure domains, there is a flexibility to add any number of disks. Make sure to verify that flexible scaling is enabled. For information, refer to the Knowledgebase article Verify if flexible scaling is enabled . Note Flexible scaling features get enabled at the time of deployment and cannot be enabled or disabled later on. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disks to be used for scaling are attached to the storage node Make sure that LocalVolumeDiscovery and LocalVolumeSet objects are created. Procedure To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter. In the OpenShift Web Console, click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class for which you added disks or the new storage class depending on your requirement. Available Capacity displayed is based on the local disks available in storage class. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. 
In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 5.3. Scaling out storage capacity on a VMware cluster 5.3.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute Nodes and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* To scale up storage capacity: For dynamic storage devices, see Scaling up storage capacity on a cluster . 5.3.2. Adding a node to a user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node.
<Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* To scale up storage capacity: For dynamic storage devices, see Scaling up storage capacity on a cluster . 5.3.3. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in multiples of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have disks of the same type and size that were used during the initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector , as shown in the example after this procedure. Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them.
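The following is a minimal sketch of the node selector edit described in the previous procedure, showing only the nodeSelector portion of the LocalVolumeDiscovery or LocalVolumeSet spec. The kubernetes.io/hostname key and the hostnames are assumptions for illustration; keep the key and existing values already present in your own YAML and append only the new node:

spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - existing-worker-1
        - existing-worker-2
        - new-worker-3   # hostname of the newly added node (example value)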
Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* To scale up storage capacity: For local storage devices, see Scaling up a cluster created using local storage devices | [
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"storageDeviceSets: - name: example-deviceset count: 3 resources: {} placement: {} dataPVCTemplate: spec: storageClassName: accessModes: - ReadWriteOnce volumeMode: Block resources: requests: storage: 512Gi",
"get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath=' {.spec.storageDeviceSets[0].dataPVCTemplate.spec.resources.requests.storage} ' 512Gi",
"patch storagecluster ocs-storagecluster -n openshift-storage --type merge --patch \"USD(oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath=' {.spec.storageDeviceSets[0]} ' | jq '.dataPVCTemplate.spec.resources.requests.storage=\"2Ti\"' | jq -c '{spec: {storageDeviceSets: [.]}}')\" storagecluster.ocs.openshift.io/ocs-storagecluster patched",
"oc get pvc -l ceph.rook.io/DeviceSet -n openshift-storage",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/scaling_storage/scaling_storage_of_vmware_openshift_data_foundation_cluster |
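As a quick command-line cross-check after adding capacity or resizing OSDs as described above, you can list the OSD pods and their PVCs from a terminal. This is a minimal sketch: the app=rook-ceph-osd label is an assumption based on the default Rook-Ceph pod labels, while the ceph.rook.io/DeviceSet selector and the jsonpath query match the commands shown earlier in this chapter.

oc get pods -n openshift-storage -l app=rook-ceph-osd -o wide
oc get pvc -n openshift-storage -l ceph.rook.io/DeviceSet
oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.storageDeviceSets[0].dataPVCTemplate.spec.resources.requests.storage}'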
Chapter 2. Configuring the Cluster Samples Operator | Chapter 2. Configuring the Cluster Samples Operator The Cluster Samples Operator, which operates in the openshift namespace, installs and updates the Red Hat Enterprise Linux (RHEL)-based OpenShift Container Platform image streams and OpenShift Container Platform templates. 2.1. Understanding the Cluster Samples Operator During installation, the Operator creates the default configuration object for itself and then creates the sample image streams and templates, including quick start templates. Note To facilitate image stream imports from other registries that require credentials, a cluster administrator can create any additional secrets that contain the content of a Docker config.json file in the openshift namespace needed for image import. The Cluster Samples Operator configuration is a cluster-wide resource, and the deployment is contained within the openshift-cluster-samples-operator namespace. The image for the Cluster Samples Operator contains image stream and template definitions for the associated OpenShift Container Platform release. When each sample is created or updated, the Cluster Samples Operator includes an annotation that denotes the version of OpenShift Container Platform. The Operator uses this annotation to ensure that each sample matches the release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator, where that version annotation is modified or deleted, are reverted automatically. Note The Jenkins images are part of the image payload from installation and are tagged into the image streams directly. The Cluster Samples Operator configuration resource includes a finalizer which cleans up the following upon deletion: Operator managed image streams. Operator managed templates. Operator generated configuration resources. Cluster status resources. Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration. 2.1.1. Cluster Samples Operator's use of management state The Cluster Samples Operator is bootstrapped as Managed by default or if global proxy is configured. In the Managed state, the Cluster Samples Operator is actively managing its resources and keeping the component active in order to pull sample image streams and images from the registry and ensure that the requisite sample templates are installed. Certain circumstances result in the Cluster Samples Operator bootstrapping itself as Removed , including: If the Cluster Samples Operator cannot reach registry.redhat.io after three minutes on initial startup after a clean installation. If the Cluster Samples Operator detects it is on an IPv6 network. However, if the Cluster Samples Operator detects that it is on an IPv6 network and an OpenShift Container Platform global proxy is configured, then the IPv6 check supersedes all the checks. As a result, the Cluster Samples Operator bootstraps itself as Removed . Important IPv6 installations are not currently supported by registry.redhat.io . The Cluster Samples Operator pulls most of the sample image streams and images from registry.redhat.io . 2.1.1.1. Restricted network installation Bootstrapping as Removed when unable to access registry.redhat.io facilitates restricted network installations when the network restriction is already in place.
Bootstrapping as Removed when network access is restricted allows the cluster administrator more time to decide if samples are desired, because the Cluster Samples Operator does not submit alerts that sample image stream imports are failing when the management state is set to Removed . When the Cluster Samples Operator comes up as Managed and attempts to install sample image streams, it starts alerting two hours after initial installation if there are failing imports. 2.1.1.2. Restricted network installation with initial network access Conversely, if a cluster that is intended to be a restricted network or disconnected cluster is first installed while network access exists, the Cluster Samples Operator installs the content from registry.redhat.io since it can access it. If you want the Cluster Samples Operator to still bootstrap as Removed in order to defer samples installation until you have decided which samples are desired, set up image mirrors, and so on, then follow the instructions for using the Samples Operator with an alternate registry and customizing nodes, both linked in the additional resources section, to override the Cluster Samples Operator default configuration and initially come up as Removed . You must put the following additional YAML file in the openshift directory created by openshift-install create manifests : Example Cluster Samples Operator YAML file with managementState: Removed apiVersion: samples.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: architectures: - x86_64 managementState: Removed 2.1.2. Cluster Samples Operator's tracking and error recovery of image stream imports After creation or update of a samples image stream, the Cluster Samples Operator monitors the progress of each image stream tag's image import. If an import fails, the Cluster Samples Operator retries the import through the image stream image import API, which is the same API used by the oc import-image command, approximately every 15 minutes until it sees the import succeed, or if the Cluster Samples Operator's configuration is changed such that either the image stream is added to the skippedImagestreams list, or the management state is changed to Removed . Additional resources If the Cluster Samples Operator is removed during installation, you can use the Cluster Samples Operator with an alternate registry so content can be imported, and then set the Cluster Samples Operator to Managed to get the samples. To ensure the Cluster Samples Operator bootstraps as Removed in a restricted network installation with initial network access to defer samples installation until you have decided which samples are desired, follow the instructions for customizing nodes to override the Cluster Samples Operator default configuration and initially come up as Removed . To host samples in your disconnected environment, follow the instructions for using the Cluster Samples Operator with an alternate registry . 2.1.3. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed .
If you choose to change it to Managed , it installs samples. Note The use of samples in a network-restricted or disconnected environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. See Using Cluster Samples Operator image streams with alternate or mirrored registries for a detailed procedure. 2.2. Cluster Samples Operator configuration parameters The samples resource offers the following configuration fields: Parameter Description managementState Managed : The Cluster Samples Operator updates the samples as the configuration dictates. Unmanaged : The Cluster Samples Operator ignores updates to its configuration resource object and any image streams or templates in the openshift namespace. Removed : The Cluster Samples Operator removes the set of Managed image streams and templates in the openshift namespace. It ignores new samples created by the cluster administrator or any samples in the skipped lists. After the removals are complete, the Cluster Samples Operator works like it is in the Unmanaged state and ignores any watch events on the sample resources, image streams, or templates. samplesRegistry Allows you to specify which registry is accessed by image streams for their image content. samplesRegistry defaults to registry.redhat.io for OpenShift Container Platform. Note Creation or update of RHEL content does not commence if the secret for pull access is not in place when either Samples Registry is not explicitly set, leaving an empty string, or when it is set to registry.redhat.io. In both cases, image imports work off of registry.redhat.io, which requires credentials. Creation or update of RHEL content is not gated by the existence of the pull secret if the Samples Registry is overridden to a value other than the empty string or registry.redhat.io. architectures Placeholder to choose an architecture type. skippedImagestreams Image streams that are in the Cluster Samples Operator's inventory but that the cluster administrator wants the Operator to ignore or not manage. You can add a list of image stream names to this parameter. For example, ["httpd","perl"] . skippedTemplates Templates that are in the Cluster Samples Operator's inventory, but that the cluster administrator wants the Operator to ignore or not manage. If secret, image stream, or template watch events come in before the initial samples resource object is created, the Cluster Samples Operator detects and re-queues the event. 2.2.1. Configuration restrictions When the Cluster Samples Operator starts supporting multiple architectures, the architecture list is not allowed to be changed while in the Managed state.
To change the architectures values, a cluster administrator must: Mark the Management State as Removed , saving the change. In a subsequent change, edit the architecture and change the Management State back to Managed . The Cluster Samples Operator still processes secrets while in Removed state. You can create the secret before switching to Removed , while in Removed before switching to Managed , or after switching to Managed state. There are delays in creating the samples until the secret event is processed if you create the secret after switching to Managed . This helps facilitate the changing of the registry, where you choose to remove all the samples before switching to ensure a clean slate. Removing all samples before switching is not required. 2.2.2. Conditions The samples resource maintains the following conditions in its status: Condition Description SamplesExists Indicates the samples are created in the openshift namespace. ImageChangesInProgress True when image streams are created or updated, but not all of the tag spec generations and tag status generations match. False when all of the generations match, or unrecoverable errors occurred during import; the last seen error is in the message field. The list of pending image streams is in the reason field. This condition is deprecated in OpenShift Container Platform. ConfigurationValid True or False based on whether any of the restricted changes noted previously are submitted. RemovePending Indicator that there is a Management State: Removed setting pending, but the Cluster Samples Operator is waiting for the deletions to complete. ImportImageErrorsExist Indicator of which image streams had errors during the image import phase for one of their tags. True when an error has occurred. The list of image streams with an error is in the reason field. The details of each error reported are in the message field. MigrationInProgress True when the Cluster Samples Operator detects that the version is different than the Cluster Samples Operator version with which the current samples set are installed. This condition is deprecated in OpenShift Container Platform. 2.3. Accessing the Cluster Samples Operator configuration You can configure the Cluster Samples Operator by editing the file with the provided parameters. Prerequisites Install the OpenShift CLI ( oc ). Procedure Access the Cluster Samples Operator configuration: $ oc edit configs.samples.operator.openshift.io/cluster -o yaml The Cluster Samples Operator configuration resembles the following example: apiVersion: samples.operator.openshift.io/v1 kind: Config ... 2.4. Removing deprecated image stream tags from the Cluster Samples Operator The Cluster Samples Operator leaves deprecated image stream tags in an image stream because users can have deployments that use the deprecated image stream tags. You can remove deprecated image stream tags by editing the image stream with the oc tag command. Note Deprecated image stream tags that the samples providers have removed from their image streams are not included on initial installations. Prerequisites You installed the oc CLI. Procedure Remove deprecated image stream tags by editing the image stream with the oc tag command. $ oc tag -d <image_stream_name:tag> Example output Deleted tag default/<image_stream_name:tag>. Additional resources For more information about configuring credentials, see Using image pull secrets . | [
"apiVersion: samples.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: architectures: - x86_64 managementState: Removed",
"oc edit configs.samples.operator.openshift.io/cluster -o yaml",
"apiVersion: samples.operator.openshift.io/v1 kind: Config",
"oc tag -d <image_stream_name:tag>",
"Deleted tag default/<image_stream_name:tag>."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/images/configuring-samples-operator |
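The configuration parameters described above can also be applied in a single step with oc patch instead of oc edit. The following is a hedged sketch: the mirror registry hostname is a placeholder and the skipped image streams are only examples; only the fields you actually need to change should be included in the patch.

oc patch configs.samples.operator.openshift.io/cluster --type merge \
  --patch '{"spec":{"samplesRegistry":"mirror.registry.example.com:5000","skippedImagestreams":["httpd","perl"],"managementState":"Managed"}}'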
Chapter 5. Performing health checks on Red Hat Quay deployments | Chapter 5. Performing health checks on Red Hat Quay deployments Health check mechanisms are designed to assess the health and functionality of a system, service, or component. Health checks help ensure that everything is working correctly, and can be used to identify potential issues before they become critical problems. By monitoring the health of a system, Red Hat Quay administrators can address abnormalities or potential failures for things like geo-replication deployments, Operator deployments, standalone Red Hat Quay deployments, object storage issues, and so on. Performing health checks can also help reduce the likelihood of encountering troubleshooting scenarios. Health check mechanisms can play a role in diagnosing issues by providing valuable information about the system's current state. By comparing health check results with expected benchmarks or predefined thresholds, deviations or anomalies can be identified quicker. 5.1. Red Hat Quay health check endpoints Important Links contained herein to any external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or its entities, products, or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content. Red Hat Quay has several health check endpoints. The following table shows you the health check, a description, an endpoint, and an example output. Table 5.1. Health check endpoints Health check Description Endpoint Example output instance The instance endpoint acquires the entire status of the specific Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , disk_space , registry_gunicorn , service_key , and web_gunicorn. Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/instance or https://{quay-ip-endpoint}/health {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} endtoend The endtoend endpoint conducts checks on all services of your Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , redis , storage . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/endtoend {"data":{"services":{"auth":true,"database":true,"redis":true,"storage":true}},"status_code":200} warning The warning endpoint conducts a check on the warnings. Returns a dict with key-value pairs for the following: disk_space_warning . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/warning {"data":{"services":{"disk_space_warning":true}},"status_code":503} 5.2. Navigating to a Red Hat Quay health check endpoint Use the following procedure to navigate to the instance endpoint. This procedure can be repeated for endtoend and warning endpoints. 
Procedure On your web browser, navigate to https://{quay-ip-endpoint}/health/instance . You are taken to the health instance page, which returns information like the following: {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} For Red Hat Quay, "status_code": 200 means that the instance is healthy. Conversely, if you receive "status_code": 503 , there is an issue with your deployment. | [
"{\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/troubleshooting_red_hat_quay/health-check-quay |
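The same health check endpoints can be queried from a terminal, which is convenient for scripting or monitoring probes. This is a minimal sketch: quay.example.com is a placeholder for your registry hostname, and the -k flag should only be used with self-signed certificates in test environments.

curl -s -k https://quay.example.com/health/instance
curl -s -k https://quay.example.com/health/endtoend
curl -s -k https://quay.example.com/health/warning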
5.4.15. Shrinking Logical Volumes | 5.4.15. Shrinking Logical Volumes You can reduce the size of a logical volume with the lvreduce command. Note Shrinking is not supported on a GFS2 or XFS file system, so you cannot reduce the size of a logical volume that contains a GFS2 or XFS file system. If the logical volume you are reducing contains a file system, to prevent data loss you must ensure that the file system is not using the space in the logical volume that is being reduced. For this reason, it is recommended that you use the --resizefs option of the lvreduce command when the logical volume contains a file system. When you use this option, the lvreduce command attempts to reduce the file system before shrinking the logical volume. If shrinking the file system fails, as can occur if the file system is full or the file system does not support shrinking, then the lvreduce command will fail and not attempt to shrink the logical volume. Warning In most cases, the lvreduce command warns about possible data loss and asks for a confirmation. However, you should not rely on these confirmation prompts to prevent data loss because in some cases you will not see these prompts, such as when the logical volume is inactive or the --resizefs option is not used. Note that using the --test option of the lvreduce command does not indicate whether the operation is safe, as this option does not check the file system or test the file system resize. The following command shrinks the logical volume lvol1 in volume group vg00 to be 64 megabytes. In this example, lvol1 contains a file system, which this command resizes together with the logical volume. This example shows the output of the command. Specifying the - sign before the resize value indicates that the value will be subtracted from the logical volume's actual size. The following example shows the command you would use if, instead of shrinking a logical volume to an absolute size of 64 megabytes, you wanted to shrink the volume by a value of 64 megabytes. | [
"lvreduce --resizefs -L 64M vg00/lvol1 fsck from util-linux 2.23.2 /dev/mapper/vg00-lvol1: clean, 11/25688 files, 8896/102400 blocks resize2fs 1.42.9 (28-Dec-2013) Resizing the filesystem on /dev/mapper/vg00-lvol1 to 65536 (1k) blocks. The filesystem on /dev/mapper/vg00-lvol1 is now 65536 blocks long. Size of logical volume vg00/lvol1 changed from 100.00 MiB (25 extents) to 64.00 MiB (16 extents). Logical volume vg00/lvol1 successfully resized.",
"lvreduce --resizefs -L -64M vg00/lvol1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lv_reduce |
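Before running lvreduce, it is worth confirming how much space the file system is actually using and the current size of the logical volume, so that the reduced size still fits the data. The following is a minimal sketch; /mnt/data is an assumed mount point for vg00/lvol1.

findmnt /dev/vg00/lvol1        # confirm where the logical volume is mounted
df -h /mnt/data                # check how much of the file system is in use (mount point is an assumption)
lvs vg00/lvol1                 # confirm the current logical volume size
lvreduce --resizefs -L 64M vg00/lvol1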
18.5. Installing in the Graphical User Interface | 18.5. Installing in the Graphical User Interface The graphical installation interface is the preferred method of manually installing Red Hat Enterprise Linux. It allows you full control over all available settings, including custom partitioning and advanced storage configuration, and it is also localized to many languages other than English, allowing you to perform the entire installation in a different language. The graphical mode is used by default when you boot the system from local media (a CD, DVD or a USB flash drive). Figure 18.2. The Installation Summary Screen The sections below discuss each screen available in the installation process. Note that due to the installer's parallel nature, most of the screens do not have to be completed in the order in which they are described here. Each screen in the graphical interface contains a Help button. This button opens the Yelp help browser displaying the section of the Red Hat Enterprise Linux Installation Guide relevant to the current screen. You can also control the graphical installer with your keyboard. The following table shows the shortcuts you can use. Table 18.2. Graphical installer keyboard shortcuts Shortcut keys Usage Tab and Shift + Tab Cycle through active control elements (buttons, check boxes, and so on) on the current screen Up and Down Scroll through lists Left and Right Scroll through horizontal toolbars and table entries Space and Enter Select or remove a highlighted item from selection and expand and collapse drop-down menus Additionally, elements in each screen can be toggled using their respective shortcuts. These shortcuts are highlighted (underlined) when you hold down the Alt key; to toggle that element, press Alt + X , where X is the highlighted letter. Your current keyboard layout is displayed in the top right hand corner. Only one layout is configured by default; if you configure more than one layout in the Keyboard Layout screen ( Section 18.10, "Keyboard Configuration" ), you can switch between them by clicking the layout indicator. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-graphical-mode-s390
14.7.2. Useful Websites | 14.7.2. Useful Websites http://acl.bestbits.at/ - Website for ACLs | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/acls_additional_resources-useful_websites |
Chapter 18. Installing on VMC | Chapter 18. Installing on VMC 18.1. Preparing to install on VMC 18.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use Telemetry, you configured the firewall to allow the sites required by your cluster. 18.1.2. Choosing a method to install OpenShift Container Platform on VMC You can install OpenShift Container Platform on VMC by using installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provide. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See the Installation process for more information about installer-provisioned and user-provisioned installation processes. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the VMC platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 18.1.2.1. Installer-provisioned infrastructure installation of OpenShift Container Platform on VMC Installer-provisioned infrastructure allows the installation program to pre-configure and automate the provisioning of resources required by OpenShift Container Platform. Installing a cluster on VMC : You can install OpenShift Container Platform on VMC by using installer-provisioned infrastructure installation with no customization. Installing a cluster on VMC with customizations : You can install OpenShift Container Platform on VMC by using installer-provisioned infrastructure installation with the default customization options. Installing a cluster on VMC with network customizations : You can install OpenShift Container Platform on installer-provisioned VMC infrastructure, with network customizations. You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on VMC in a restricted network : You can install a cluster on VMC infrastructure in a restricted network by creating an internal mirror of the installation release content. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 18.1.2.2. User-provisioned infrastructure installation of OpenShift Container Platform on VMC User-provisioned infrastructure requires the user to provision all resources required by OpenShift Container Platform. Installing a cluster on VMC with user-provisioned infrastructure : You can install OpenShift Container Platform on VMC infrastructure that you provision. Installing a cluster on VMC with user-provisioned infrastructure and network customizations : You can install OpenShift Container Platform on VMC infrastructure that you provision with customized network configuration options. 
Installing a cluster on VMC in a restricted network with user-provisioned infrastructure : OpenShift Container Platform can be installed on VMC infrastructure that you provision in a restricted network. 18.1.3. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 18.1. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 13 or later vSphere ESXi hosts 6.5 or later vCenter host 6.5 or later Important Installing a cluster on VMware vSphere version 6.7U2 or earlier and virtual hardware version 13 is now deprecated. These versions are still fully supported, but support will be removed in a future version of OpenShift Container Platform. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. Table 18.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 18.1.4. Uninstalling an installer-provisioned infrastructure installation of OpenShift Container Platform on VMC Uninstalling a cluster on VMC that uses installer-provisioned infrastructure : You can remove a cluster that you deployed on VMC infrastructure that used installer-provisioned infrastructure. 18.2. Installing a cluster on VMC In OpenShift Container Platform version 4.9, you can install a cluster on VMware vSphere by deploying it to VMware Cloud (VMC) on AWS . Once you have configured your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 18.2.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. 
Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records. A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address. A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any other Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access to the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 18.2.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC.
It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 18.2.2. vSphere prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned block registry storage . For more information on persistent storage, see Understanding persistent storage . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 18.2.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.2.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 18.3. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 13 or later vSphere ESXi hosts 6.5 or later vCenter host 6.5 or later Important Installing a cluster on VMware vSphere version 6.7U2 or earlier and virtual hardware version 13 is now deprecated. These versions are still fully supported, but support will be removed in a future version of OpenShift Container Platform. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. Table 18.4. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. 
If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 18.2.5. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 18.5. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 18.6. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 18.7. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 18.2.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 18.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster Always Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 18.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change 
Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propagate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 18.3. Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Always True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported. To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss.
Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. You must configure the default gateway to use the DHCP server. All nodes must be in the same VLAN. You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 18.8. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 
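For example, if the cluster name is ocp4 , the base domain is example.com , and you allocated 192.168.100.10 for the API and 192.168.100.11 for Ingress, the records in a BIND zone file might look like the following sketch. The cluster name, base domain, and IP addresses shown here are placeholder values only; substitute the values for your environment.
; API VIP
api.ocp4.example.com.       IN  A  192.168.100.10
; Ingress VIP (wildcard for application routes)
*.apps.ocp4.example.com.    IN  A  192.168.100.11
Before you run the installation program, you can confirm that both records resolve from the installation host, for example with dig +short api.ocp4.example.com and dig +short console-openshift-console.apps.ocp4.example.com .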
18.2.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 18.2.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. 
Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 18.2.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 18.2.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. 
Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Important Use the openshift-install command from the bastion hosted in the VMC environment. Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 18.2.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 18.2.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 18.2.13. 
Creating registry storage After you install the cluster, you must create storage for the registry Operator. 18.2.13.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 18.2.13.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 18.2.13.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). 
The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when replicating to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 18.2.13.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 18.2.14. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 18.2.15. Steal clock accounting By default, the installation program provisions the cluster's virtual machines without enabling the steal clock accounting parameter ( stealclock.enabled ). Enabling steal clock accounting can help with troubleshooting cluster issues. 
After the cluster is deployed, use the vSphere Client to enable this parameter on each of the virtual machines. For more information, see this Red Hat knowledge base article . 18.2.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 18.2.17. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 18.3. Installing a cluster on VMC with customizations In OpenShift Container Platform version 4.9, you can install a cluster on your VMware vSphere instance using installer-provisioned infrastructure by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. To customize the OpenShift Container Platform installation, you modify parameters in the install-config.yaml file before you install the cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 18.3.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records. A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address. A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. 
An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any another Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 18.3.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 18.3.2. vSphere prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned block registry storage . For more information on persistent storage, see Understanding persistent storage . 
If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 18.3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.3.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 18.9. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 13 or later vSphere ESXi hosts 6.5 or later vCenter host 6.5 or later Important Installing a cluster on VMware vSphere version 6.7U2 or earlier and virtual hardware version 13 is now deprecated. These versions are still fully supported, but support will be removed in a future version of OpenShift Container Platform. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. Table 18.10. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 18.3.5. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 18.11. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 
10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 18.12. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 18.13. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 18.3.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 18.4. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster Always Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 18.5. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change 
Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 18.6. Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Always True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing a OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported. To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes, invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss. 
Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. You must configure the default gateway to use the DHCP server. All nodes must be in the same VLAN. You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 18.14. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. 
This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 18.3.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 18.3.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. 
Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 18.3.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 18.3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. 
Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 18.3.10.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 18.3.10.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 18.15. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . 
metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 18.3.10.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 18.16. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . 
Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 18.3.10.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 18.17. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.
Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 18.3.10.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 18.18. Additional VMware vSphere cluster parameters Parameter Description Values platform.vsphere.vCenter The fully-qualified hostname or IP address of the vCenter server. String platform.vsphere.username The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String platform.vsphere.password The password for the vCenter user name. String platform.vsphere.datacenter The name of the datacenter to use in the vCenter instance. String platform.vsphere.defaultDatastore The name of the default datastore to use for provisioning volumes. String platform.vsphere.folder Optional . The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . platform.vsphere.network The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String platform.vsphere.cluster The vCenter cluster to install the OpenShift Container Platform cluster in. String platform.vsphere.apiVIP The virtual IP (VIP) address that you configured for control plane API access. An IP address, for example 128.0.0.1 . platform.vsphere.ingressVIP The virtual IP (VIP) address that you configured for cluster ingress. An IP address, for example 128.0.0.1 . 18.3.10.1.5. 
Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 18.19. Optional VMware vSphere machine pool parameters Parameter Description Values platform.vsphere.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . platform.vsphere.osDisk.diskSizeGB The size of the disk in gigabytes. Integer platform.vsphere.cpus The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer platform.vsphere.coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer platform.vsphere.memoryMB The size of a virtual machine's memory in megabytes. Integer 18.3.10.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 7 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 8 The cluster name that you specified in your DNS records. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. 
The installation program uses the root resource pool of the vSphere cluster as the default resource pool. 18.3.10.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 18.3.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. 
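If you configured a cluster-wide proxy as described above, it can be useful to confirm from the installation host that the proxy actually forwards HTTPS traffic before you start the deployment. The following check is only a suggested sketch that reuses the placeholder proxy values from the example install-config.yaml ; any HTTP response from the Quay.io registry endpoint, including 401 Unauthorized, indicates that the proxy path works: USD curl -x http://<username>:<pswd>@<ip>:<port> -sI https://quay.io/v2/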
Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Important Use the openshift-install command from the bastion hosted in the VMC environment. Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 18.3.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 18.3.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 18.3.14. Creating registry storage After you install the cluster, you must create storage for the Registry Operator. 18.3.14.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 18.3.14.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. 
Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 18.3.14.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when replicating to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 18.3.14.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.
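Before you patch the registry in the following procedure, you can inspect the current rollout strategy and replica count on the same configs.imageregistry.operator.openshift.io/cluster resource that the procedure modifies. This read-only check is only a suggested sketch that uses standard oc jsonpath output formatting and is not part of the documented procedure: USD oc get configs.imageregistry.operator.openshift.io/cluster -o jsonpath='{.spec.rolloutStrategy}{" "}{.spec.replicas}{"\n"}'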
Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 18.3.15. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 18.3.16. Steal clock accounting By default, the installation program provisions the cluster's virtual machines without enabling the steal clock accounting parameter ( stealclock.enabled ). Enabling steal clock accounting can help with troubleshooting cluster issues. After the cluster is deployed, use the vSphere Client to enable this parameter on each of the virtual machines. For more information, see this Red Hat knowledge base article . 18.3.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 18.3.18. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 
Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 18.4. Installing a cluster on VMC with network customizations In OpenShift Container Platform version 4.9, you can install a cluster on your VMware vSphere instance using installer-provisioned infrastructure with customized network configuration options by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. By customizing your OpenShift Container Platform network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 18.4.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records. A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address. A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. 
These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any other Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 18.4.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware Cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 18.4.2. vSphere prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned block registry storage . For more information on persistent storage, see Understanding persistent storage . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 18.4.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.4.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 18.20. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 13 or later vSphere ESXi hosts 6.5 or later vCenter host 6.5 or later Important Installing a cluster on VMware vSphere version 6.7U2 or earlier and virtual hardware version 13 is now deprecated. These versions are still fully supported, but support will be removed in a future version of OpenShift Container Platform. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. Table 18.21. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 18.4.5. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 18.22. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 18.23. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 18.24. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 18.4.6. 
vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 18.7. Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster Always Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk 
VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 18.8. Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual 
machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 18.9. Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Always True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing a OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported. To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. 
For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss. Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. You must configure the default gateway to use the DHCP server. All nodes must be in the same VLAN. You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which an NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 18.25. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>.
This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 18.4.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 18.4.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 18.4.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 18.4.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 18.4.10.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 18.4.10.1.1. 
Required configuration parameters Required installation configuration parameters are described in the following table: Table 18.26. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 18.4.10.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 18.27. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . 
The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 18.4.10.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 18.28. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 18.4.10.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 18.29. Additional VMware vSphere cluster parameters Parameter Description Values platform.vsphere.vCenter The fully-qualified hostname or IP address of the vCenter server. String platform.vsphere.username The user name to use to connect to the vCenter instance. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String platform.vsphere.password The password for the vCenter user name. String platform.vsphere.datacenter The name of the datacenter to use in the vCenter instance. String platform.vsphere.defaultDatastore The name of the default datastore to use for provisioning volumes. String platform.vsphere.folder Optional . The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder.
String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . platform.vsphere.network The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String platform.vsphere.cluster The vCenter cluster to install the OpenShift Container Platform cluster in. String platform.vsphere.apiVIP The virtual IP (VIP) address that you configured for control plane API access. An IP address, for example 128.0.0.1 . platform.vsphere.ingressVIP The virtual IP (VIP) address that you configured for cluster ingress. An IP address, for example 128.0.0.1 . 18.4.10.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 18.30. Optional VMware vSphere machine pool parameters Parameter Description Values platform.vsphere.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . platform.vsphere.osDisk.diskSizeGB The size of the disk in gigabytes. Integer platform.vsphere.cpus The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer platform.vsphere.coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer platform.vsphere.memoryMB The size of a virtual machine's memory in megabytes. Integer 18.4.10.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 7 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 8 The cluster name that you specified in your DNS records. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. 18.4.10.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 18.4.11. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 18.4.12. Specifying advanced network configuration You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 18.4.13. 
Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 18.4.13.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 18.31. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 18.32. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 18.33. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. 
This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 18.34. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . This value cannot be changed after cluster installation. genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. Table 18.35. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. 
Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Example OVN-Kubernetes configuration defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 18.36. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 18.4.14. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Important Use the openshift-install command from the bastion hosted in the VMC environment. Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 18.4.15. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 18.4.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
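The following procedure uses the kubeconfig file that the installation program generates. As a sketch of an alternative, you can also log in with the kubeadmin password that the installation program prints when the deployment completes, assuming that the api.<cluster_name>.<base_domain> record you configured resolves to the API VIP:
USD oc login -u kubeadmin -p <kubeadmin_password> https://api.<cluster_name>.<base_domain>:6443
Both approaches provide cluster-admin access; the kubeconfig approach is shown in the procedure below because it does not require typing the password on the command line.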
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 18.4.17. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 18.4.17.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 18.4.17.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 18.4.17.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. 
Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when replicating to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 18.4.17.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 18.4.18. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information.
Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 18.4.19. Steal clock accounting By default, the installation program provisions the cluster's virtual machines without enabling the steal clock accounting parameter ( stealclock.enabled ). Enabling steal clock accounting can help with troubleshooting cluster issues. After the cluster is deployed, use the vSphere Client to enable this parameter on each of the virtual machines. For more information, see this Red Hat knowledge base article . 18.4.20. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 18.4.21. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 18.5. Installing a cluster on VMC in a restricted network In OpenShift Container Platform version 4.9, you can install a cluster on VMware vSphere infrastructure in a restricted network by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automate the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 18.5.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records. A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address. A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address.
Configure the following firewall rules: An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any other Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 18.5.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 18.5.2. vSphere prerequisites You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned block registry storage . For more information on persistent storage, see Understanding persistent storage . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. 18.5.3. About installations in restricted networks In OpenShift Container Platform 4.9, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 18.5.3.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 18.5.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.5.5. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 18.37. 
Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 13 or later vSphere ESXi hosts 6.5 or later vCenter host 6.5 or later Important Installing a cluster on VMware vSphere version 6.7U2 or earlier and virtual hardware version 13 is now deprecated. These versions are still fully supported, but support will be removed in a future version of OpenShift Container Platform. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. Table 18.38. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 18.5.6. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 18.39. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 18.40. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 18.41. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 18.5.7. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. 
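If you manage vSphere from the command line, such a role can also be created with the govc CLI. govc is not required by this procedure, so treat the following as a hedged sketch: the role name and connection variables are placeholders, and only a few privileges are shown. Grant the complete privilege sets listed in the tables that follow.

# Connection details for govc (placeholders)
export GOVC_URL='https://vcenter.example.com'
export GOVC_USERNAME='installer@vsphere.local'
export GOVC_PASSWORD='<password>'

# Create a custom role that carries (a subset of) the required privileges
govc role.create openshift-installer \
  Cns.Searchable \
  InventoryService.Tagging.CreateTag \
  StorageProfile.View \
  VirtualMachine.Config.AddNewDisk

# Review the role before assigning it to the installation user on the relevant vSphere objects
govc role.ls openshift-installer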
While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 18.10. Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster Always Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 18.11. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change 
Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propagate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 18.12. Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Always True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported. To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss.
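Before any datastore migration, it can therefore be worth auditing which persistent volumes reference in-tree vSphere VMDK paths. The following one-liner is an illustrative sketch only; it assumes PVs provisioned through the in-tree vSphere storage drivers described earlier in this section:

# List PVs backed by in-tree vSphere volumes together with the VMDK path each one references
oc get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.vsphereVolume.volumePath}{"\n"}{end}' \
  | awk -F'\t' '$2 != ""'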
Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. You must configure the default gateway to use the DHCP server. All nodes must be in the same VLAN. You cannot scale the cluster using a second VLAN as a Day 2 operation. The VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 18.42. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. 
A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 18.5.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 18.5.9.
Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 18.5.10. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network VMware vSphere environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.9 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - vSphere image. Upload the image you downloaded to a location that is accessible from the bastion server. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 18.5.11. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible location. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. 
Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Paste the pull secret from the Red Hat OpenShift Cluster Manager . In the install-config.yaml file, set the value of platform.vsphere.clusterOSImage to the image location or name. For example: platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which look like this excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release To complete these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. 
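For example, a plain copy kept outside the installation directory is enough; the paths below are placeholders:

# Keep a reusable copy of the configuration outside the installation directory
mkdir -p ~/ocp-configs
cp <installation_directory>/install-config.yaml ~/ocp-configs/install-config.yaml.bak

# To reuse it later, copy it back into a fresh, empty installation directory before running the installer
# cp ~/ocp-configs/install-config.yaml.bak <new_installation_directory>/install-config.yaml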
Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 18.5.11.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 18.5.11.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 18.43. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 18.5.11.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 18.44. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . 
networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 18.5.11.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 18.45. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. 
controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 18.5.11.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 18.46. 
Additional VMware vSphere cluster parameters Parameter Description Values platform.vsphere.vCenter The fully-qualified hostname or IP address of the vCenter server. String platform.vsphere.username The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String platform.vsphere.password The password for the vCenter user name. String platform.vsphere.datacenter The name of the datacenter to use in the vCenter instance. String platform.vsphere.defaultDatastore The name of the default datastore to use for provisioning volumes. String platform.vsphere.folder Optional . The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . platform.vsphere.network The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String platform.vsphere.cluster The vCenter cluster to install the OpenShift Container Platform cluster in. String platform.vsphere.apiVIP The virtual IP (VIP) address that you configured for control plane API access. An IP address, for example 128.0.0.1 . platform.vsphere.ingressVIP The virtual IP (VIP) address that you configured for cluster ingress. An IP address, for example 128.0.0.1 . 18.5.11.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 18.47. Optional VMware vSphere machine pool parameters Parameter Description Values platform.vsphere.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . platform.vsphere.osDisk.diskSizeGB The size of the disk in gigabytes. Integer platform.vsphere.cpus The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer platform.vsphere.coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer platform.vsphere.memoryMB The size of a virtual machine's memory in megabytes. Integer 18.5.11.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip clusterOSImage: http://mirror.example.com/images/rhcos-48.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 7 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 8 The cluster name that you specified in your DNS records. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. 10 The location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that is accessible from the bastion server. 11 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 12 Provide the contents of the certificate file that you used for your mirror registry. 13 Provide the imageContentSources section from the output of the command to mirror the repository. 18.5.11.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. 
You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 18.5.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
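Optionally, before you start the deployment you can confirm from the bastion that the restricted-network endpoints referenced in install-config.yaml are reachable. This pre-flight check is an illustrative sketch, not part of the documented procedure; the host names and image URL are the placeholders used earlier in this section:

# The mirror registry answers on its API endpoint (expect an HTTP status code such as 200 or 401)
curl -sk -o /dev/null -w '%{http_code}\n' https://<mirror_host_name>:5000/v2/

# The RHCOS OVA referenced by platform.vsphere.clusterOSImage is reachable
curl -skI http://mirror.example.com/images/rhcos-48.83.202103221318-0-vmware.x86_64.ova | head -n 1

# The API record and a name under the wildcard Ingress record resolve to the configured VIPs
dig +short api.<cluster_name>.<base_domain>
dig +short console.apps.<cluster_name>.<base_domain>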
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Important Use the openshift-install command from the bastion hosted in the VMC environment. Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 18.5.13. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
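For example, on Linux you might move the extracted binary into /usr/local/bin , assuming that directory is already on your PATH :

# Make the binary executable and move it onto the PATH
chmod +x oc
sudo mv oc /usr/local/bin/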
To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 18.5.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 18.5.15. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 18.5.16. Creating registry storage After you install the cluster, you must create storage for the Registry Operator. 18.5.16.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. 
After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 18.5.16.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 18.5.16.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when replicating to more than one replica.
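If you prefer to set the Operator state from the command line instead of editing the resource interactively, a merge patch similar to the following switches the managementState discussed above to Managed ; this is only a sketch that covers the managementState field alone, so still configure spec.storage.pvc as shown in the previous step: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Managed"}}'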
Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 18.5.17. Steal clock accounting By default, the installation program provisions the cluster's virtual machines without enabling the steal clock accounting parameter ( stealclock.enabled ). Enabling steal clock accounting can help with troubleshooting cluster issues. After the cluster is deployed, use the vSphere Client to enable this parameter on each of the virtual machines. For more information, see this Red Hat knowledge base article . 18.5.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 18.5.19. Next steps Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . 18.6. Installing a cluster on VMC with user-provisioned infrastructure In OpenShift Container Platform version 4.9, you can install a cluster on VMware vSphere infrastructure that you provision by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automate the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 18.6.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443.
This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any other Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 18.6.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 18.6.2. vSphere prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned block registry storage .
For more information on persistent storage, see Understanding persistent storage . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 18.6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.6.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 18.48. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 13 or later vSphere ESXi hosts 6.5 or later vCenter host 6.5 or later Important Installing a cluster on VMware vSphere version 6.7U2 or earlier and virtual hardware version 13 is now deprecated. These versions are still fully supported, but support will be removed in a future version of OpenShift Container Platform. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. Table 18.49. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Additional resources To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 18.6.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 18.6.5.1. 
Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 18.50. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 7.9, or RHEL 8.4. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 18.6.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 18.51. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 7.9, or RHEL 8.4 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4. 18.6.5.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 18.6.5.4. 
Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 18.6.5.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 18.6.5.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 18.52. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 
10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 18.53. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 18.54. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. 18.6.5.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 18.55. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. 
A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 18.6.5.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 18.13. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 
Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 18.14. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 18.6.5.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Note Session persistence is not required for the API load balancer to function properly. Configure the following ports on both the front and back of the load balancers: Table 18.56. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. 
You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 18.57. Application ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic 1936 The worker nodes that run the Ingress Controller pods, by default. You must configure the /healthz/ready endpoint for the ingress health check probe. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. 18.6.5.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Example 18.15. 
Sample API and application ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 In the example, the cluster name is ocp4 . 2 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 3 5 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 4 Port 22623 handles the machine config server traffic and points to the control plane machines. 6 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 7 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . 18.6.6. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 18.6.7. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 0 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 0 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 0 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 0 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 
0 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 18.6.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 18.6.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 18.6.10. Manually creating the installation configuration file For user-provisioned installations of OpenShift Container Platform, you manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 18.6.10.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 7 The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 The fully-qualified hostname or IP address of the vCenter server. 10 The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere.
11 The password associated with the vSphere user. 12 The vSphere datacenter. 13 The default vSphere datastore to use. 14 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 The pull secret that you obtained from OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 18.6.10.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 18.6.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. 
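If you want to double-check the result before continuing, a quick listing such as the following sketch confirms that no control plane Machine or compute MachineSet manifests remain in the openshift subdirectory. The file name patterns are the ones removed by the preceding command; the check itself is optional.
USD ls <installation_directory>/openshift/ | grep -E '99_openshift-cluster-api_(master-machines|worker-machineset)' || echo "no Machine or MachineSet manifests remain"   # grep exits non-zero when nothing matches, so the echo confirms the removal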
You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 18.6.12. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware Cloud on AWS. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 18.6.13. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. 
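The bootstrap Ignition config can be served by any HTTP server that the bootstrap machine can reach. If you do not already have one, a throwaway server on the bastion host is often enough for the duration of the bootstrap; the following sketch is an illustration only, and the port and host name in it are assumptions rather than values required by this procedure.
USD cd <installation_directory>
USD python3 -m http.server 8080 &   # serve the directory over HTTP; stop this process after the bootstrap completes
USD curl -sI http://<bastion_host>:8080/bootstrap.ign   # confirm that the URL you reference in merge-bootstrap.ign returns HTTP 200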
When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. 
Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Optional: Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before booting a VM from an OVA in vSphere: USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High . Ensure that your VM's CPU and memory reservation have the following values: Memory reservation value must be equal to its configured memory size. CPU reservation value must be at least the number of low latency virtual CPUs multiplied by the measured physical CPU speed. Click Edit Configuration , and on the Configuration Parameters window, search the list of available parameters for steal clock accounting ( stealclock.enable ). If it is available, set its value to TRUE . Enabling steal clock accounting can help with troubleshooting cluster issues. Click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. 
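If you prefer to apply the configuration parameters described above from the bastion host rather than through the vSphere Client, the same govc vm.change mechanism shown earlier for the static IP kernel arguments can set them. The following is a sketch, not part of the required procedure: the GOVC_* variables, the VM name, and the use of the merge-bootstrap file apply to the bootstrap machine and must be adjusted for control plane and compute machines, and additional GOVC_* settings might be needed in your environment, for example if vCenter uses a self-signed certificate.
USD export GOVC_URL='<vcenter_fqdn>' GOVC_USERNAME='<username>' GOVC_PASSWORD='<password>'   # the vCenter details from your install-config.yaml
USD DATA=USD(cat <installation_directory>/merge-bootstrap.64)   # the base64-encoded Ignition config created earlier
USD govc vm.change -vm "<bootstrap_vm_name>" -e "guestinfo.ignition.config.data=USD{DATA}" -e "guestinfo.ignition.config.data.encoding=base64" -e "disk.EnableUUID=TRUE" -e "stealclock.enable=TRUE"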
Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the configuration and power on the VM. Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 18.6.14. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . From the Latency Sensitivity list, select High . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 18.6.15. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. 
Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. 
If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 18.6.16. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. Manual install method You can manually install bootupd by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config. Provide a machine config file with the enabled systemd unit, as shown in the following example: Example machine config variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 18.6.17. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure.
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 18.6.18. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer.
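If the bootstrap process appears to stall, you can watch it directly on the bootstrap machine over SSH, assuming the core user and the SSH key that you provided in install-config.yaml. The following is a troubleshooting sketch rather than a required step, and the host name is a placeholder.
USD ssh core@<bootstrap_machine_fqdn_or_ip> journalctl -b -f -u release-image.service -u bootkube.service   # follow the bootstrap services until they report completion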
Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 18.6.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 18.6.20. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. 
Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 18.6.21. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Configure the Operators that are not available. 18.6.21.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 18.6.21.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 18.6.21.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. 
To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when replicating to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 18.6.21.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 18.6.21.2.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. 
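Whichever storage approach you choose, you can confirm the resulting registry configuration and Operator state with a quick, read-only check such as the following sketch; it makes no changes and is not a required step.
USD oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.managementState}{" "}{.spec.storage}{"\n"}'   # expect Managed and a populated storage stanza
USD oc get clusteroperator image-registry   # AVAILABLE should report True once storage is configured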
Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 18.6.22. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 18.6.23. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 18.6.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 18.6.25. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 18.7. Installing a cluster on VMC with user-provisioned infrastructure and network customizations In OpenShift Container Platform version 4.9, you can install a cluster on your VMware vSphere instance using infrastructure you provision with customized network configuration options by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. 
The installation program and control plane automate the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 18.7.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any other Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform.
However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 18.7.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 18.7.2. vSphere prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned block registry storage . For more information on persistent storage, see Understanding persistent storage . If you use a firewall, you configured it to allow the sites that your cluster requires access to. 18.7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.7.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 18.58. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 13 or later vSphere ESXi hosts 6.5 or later vCenter host 6.5 or later Important Installing a cluster on VMware vSphere version 6.7U2 or earlier and virtual hardware version 13 is now deprecated. These versions are still fully supported, but support will be removed in a future version of OpenShift Container Platform. 
Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. Table 18.59. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Additional resources To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 18.7.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 18.7.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 18.60. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 7.9, or RHEL 8.4. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 18.7.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 18.61. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 7.9, or RHEL 8.4 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. 
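For example, an ESXi host with two sockets, eight cores per socket, and simultaneous multithreading enabled at two threads per core provides (2 threads per core x 8 cores) x 2 sockets = 32 vCPUs; with simultaneous multithreading disabled, the same host provides 16 vCPUs.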
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4. 18.7.5.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 18.7.5.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 18.7.5.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. 
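For example, if you run the ISC DHCP server, a per-node host reservation can hand out a persistent IP address and the hostname together. The following is only a sketch to be run as root on the DHCP server: the MAC address is a placeholder, the IP address and hostname follow the example records used later in this section, and the reload command assumes the standard dhcpd unit on a RHEL-based server:
cat >> /etc/dhcp/dhcpd.conf <<'EOF'
host master0 {
    hardware ethernet 00:50:56:aa:bb:01;            # MAC address of the node's network interface (placeholder)
    fixed-address 192.168.1.97;                     # persistent IP address for the node
    option host-name "master0.ocp4.example.com";    # hostname supplied to RHCOS over DHCP
}
EOF
systemctl restart dhcpd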
If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 18.7.5.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 18.62. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 18.63. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 18.64. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. 18.7.5.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. 
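As a simple pre-flight check of the connectivity requirements listed in the tables above, you can probe an individual TCP port from any provisioned host. The sketch below uses only bash built-ins and the coreutils timeout command; the host and port are placeholders, and the check applies to TCP ports only, so UDP ports such as 4789 need a different tool:
timeout 5 bash -c '</dev/tcp/<target_host>/<port>' && echo "port open" || echo "port closed or filtered"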
Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 18.65. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 18.7.5.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 18.16. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. 
root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 18.17. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 18.7.5.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. 
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Note Session persistence is not required for the API load balancer to function properly. Configure the following ports on both the front and back of the load balancers: Table 18.66. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 18.67. Application ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic 1936 The worker nodes that run the Ingress Controller pods, by default. You must configure the /healthz/ready endpoint for the ingress health check probe. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. 
In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. 18.7.5.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Example 18.18. Sample API and application ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 In the example, the cluster name is ocp4 . 2 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 3 5 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 4 Port 22623 handles the machine config server traffic and points to the control plane machines. 6 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 7 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. 
In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . 18.7.6. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. 
Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 18.7.7. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 0 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 0 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 0 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. 
Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 0 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 18.7.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 18.7.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 18.7.10. Manually creating the installation configuration file For user-provisioned installations of OpenShift Container Platform, you manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. 
The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 18.7.10.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 You must set the value of the replicas parameter to 0 . 
This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 7 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 The fully-qualified hostname or IP address of the vCenter server. 10 The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. 11 The password associated with the vSphere user. 12 The vSphere datacenter. 13 The default vSphere datastore to use. 14 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 The pull secret that you obtained from OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 18.7.10.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). 
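Before committing proxy settings to the install-config.yaml file, it can be worth confirming that the proxy is reachable from the machine where you run the installation program. The following sketch is illustrative only and reuses the placeholder proxy URL format from the example below; a status code such as 200 or a redirect indicates that the proxy can reach Quay.io:
curl -x http://<username>:<pswd>@<ip>:<port> -sSI https://quay.io/ -o /dev/null -w '%{http_code}\n'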
Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 18.7.11. Specifying advanced network configuration You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. 
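As a quick sanity check, the command shown above leaves both a manifests/ and an openshift/ directory under the installation directory, which you can list before continuing:
ls <installation_directory>/manifests/ <installation_directory>/openshift/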
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute machineSets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 18.7.12. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 18.7.12.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 18.68. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. 
spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 18.69. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 18.70. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 18.71. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. 
You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . This value cannot be changed after cluster installation. genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. Table 18.72. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Example OVN-Kubernetes configuration defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 18.73. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 18.7.13. Creating the Ignition config files Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Procedure Obtain the Ignition config files: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important If you created an install-config.yaml file, specify the directory that contains it. Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. The following files are generated in the directory: 18.7.14. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware Cloud on AWS. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 18.7.15. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. 
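Any web server that the bootstrap machine can reach over the network is suitable. As a minimal sketch, assuming Python 3 is available on the web server host and that port 8080 is open in its firewall, you could serve the file from a dedicated directory and confirm that it is retrievable; the port, directory, and address are arbitrary examples:
mkdir -p /tmp/ignition
cp <installation_directory>/bootstrap.ign /tmp/ignition/
(cd /tmp/ignition && python3 -m http.server 8080) &
# From a machine on the cluster network, confirm the file can be fetched (placeholder address):
curl -sI http://<http_server_ip>:8080/bootstrap.ign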
Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. 
On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Optional: Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before booting a VM from an OVA in vSphere: USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High . Ensure that your VM's CPU and memory reservation have the following values: Memory reservation value must be equal to its configured memory size. CPU reservation value must be at least the number of low latency virtual CPUs multiplied by the measured physical CPU speed. Click Edit Configuration , and on the Configuration Parameters window, search the list of available parameters for steal clock accounting ( stealclock.enable ). If it is available, set its value to TRUE . Enabling steal clock accounting can help with troubleshooting cluster issues. Click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. 
guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the configuration and power on the VM. Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 18.7.16. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . From the Latency Sensitivity list, select High . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 18.7.17. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. 
Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. 
The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes if the different instance types do not have the same device name.
Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: $ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml
Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: $ openshift-install create ignition-configs --dir $HOME/clusterconfig $ ls $HOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign
Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.
18.7.18. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability.
Manual install method You can manually install bootupd by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the next reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64
Machine config method Another way to enable bootupd is by providing a machine config. Provide a machine config file with the enabled systemd unit, as shown in the following example: Example Butane config variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target
18.7.19. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. While you wait, you can also follow the bootstrap logs directly on the bootstrap machine, as shown in the sketch after this paragraph.
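For example, as a minimal sketch rather than part of the official procedure, assuming you can reach the bootstrap machine over SSH as the core user with the key from your install-config.yaml file, and that the release-image.service and bootkube.service journald units exist on the bootstrap node, you could run:
$ ssh core@<bootstrap_fqdn>
[core@bootstrap ~]$ journalctl -b -f -u release-image.service -u bootkube.service
Replace <bootstrap_fqdn> with the DNS name of your bootstrap machine, for example bootstrap.ocp4.example.com . The bootkube.service log reports completion when the control plane machines take over; errors in this log usually point to DNS, load balancer, or Ignition file hosting problems.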
Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 18.7.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 18.7.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. 
You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 18.7.22. 
Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Configure the Operators that are not available. 18.7.22.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 18.7.22.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 18.7.22.2.1. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. 
An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 18.7.23. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 18.7.24. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 18.7.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 18.7.26. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 18.8. Installing a cluster on VMC in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.9, you can install a cluster on VMware vSphere infrastructure that you provision in a restricted network by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. 
Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 18.8.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Configure the following firewall rules: An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any another Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 18.8.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. 
When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 18.8.2. vSphere prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtain the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned block registry storage . For more information on persistent storage, see Understanding persistent storage . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 18.8.3. About installations in restricted networks In OpenShift Container Platform 4.9, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 18.8.3.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 18.8.4. 
Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 18.8.5. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 18.74. Version requirements for vSphere virtual environments Virtual environment product Required version VM hardware version 13 or later vSphere ESXi hosts 6.5 or later vCenter host 6.5 or later Important Installing a cluster on VMware vSphere version 6.7U2 or earlier and virtual hardware version 13 is now deprecated. These versions are still fully supported, but support will be removed in a future version of OpenShift Container Platform. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. Table 18.75. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Additional resources To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 18.8.6. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 18.8.6.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 18.76. 
Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 7.9, or RHEL 8.4. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 18.8.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 18.77. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 7.9, or RHEL 8.4 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4. 18.8.6.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 18.8.6.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 
During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 18.8.6.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 18.8.6.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 18.78. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 
500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 18.79. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 18.80. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. 18.8.6.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 18.81. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 
These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 18.8.6.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 18.19. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 
4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 18.20. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 18.8.6.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Note Session persistence is not required for the API load balancer to function properly. Configure the following ports on both the front and back of the load balancers: Table 18.82. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. 
X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 18.83. Application ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic 1936 The worker nodes that run the Ingress Controller pods, by default. You must configure the /healthz/ready endpoint for the ingress health check probe. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. 18.8.6.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Example 18.21. 
Sample API and application ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 In the example, the cluster name is ocp4 . 2 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 3 5 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 4 Port 22623 handles the machine config server traffic and points to the control plane machines. 6 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 7 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . 18.8.7. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure.
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 18.8.8. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 0 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 0 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 0 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 0 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 
0 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: $ dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 18.8.9. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 18.8.10. Manually creating the installation configuration file For user-provisioned installations of OpenShift Container Platform, you manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain the imageContentSources section from the output of the command to mirror the repository. Obtain the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Unless you use a registry that RHCOS trusts by default, such as docker.io , you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror. You must include the imageContentSources section from the output of the command to mirror the repository. Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 18.8.10.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 14 fips: false 15 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 7 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 The fully-qualified hostname or IP address of the vCenter server. 10 The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. 11 The password associated with the vSphere user. 12 The vSphere datacenter. 13 The default vSphere datastore to use. 14 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . 
If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 18 Provide the contents of the certificate file that you used for your mirror registry. 19 Provide the imageContentSources section from the output of the command to mirror the repository. 18.8.10.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 18.8.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. 
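You can confirm that the manifests were generated before you continue. For example, listing the installation directory should show the manifests and openshift subdirectories that hold the generated files; the exact file names can vary between releases:
$ ls <installation_directory>/manifests <installation_directory>/openshift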
Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 18.8.12. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware Cloud on AWS. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 18.8.13. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. 
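For example, you can verify that the hosted bootstrap Ignition config file is reachable before you create the bootstrap virtual machine. The following check is an illustration only; replace <http_server> with the host name or IP address of your own HTTP server. A 200 status code indicates that the bootstrap machine will be able to fetch the file:
$ curl -s -o /dev/null -w '%{http_code}\n' http://<http_server>/bootstrap.ign
Example output
200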
Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. 
On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Optional: Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before booting a VM from an OVA in vSphere: USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High . Ensure that your VM's CPU and memory reservation have the following values: Memory reservation value must be equal to its configured memory size. CPU reservation value must be at least the number of low latency virtual CPUs multiplied by the measured physical CPU speed. Click Edit Configuration , and on the Configuration Parameters window, search the list of available parameters for steal clock accounting ( stealclock.enable ). If it is available, set its value to TRUE . Enabling steal clock accounting can help with troubleshooting cluster issues. Click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. 
guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the configuration and power on the VM. Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 18.8.14. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . From the Latency Sensitivity list, select High . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 18.8.15. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. 
Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. 
The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 18.8.16. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. Manual install method You can manually install bootupd by using the bootctl command-line tool. Inspect the system status: # bootupctl status Example output Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config. Provide a machine config file with the enabled systemd unit, as shown in the following example: Example output variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 18.8.17. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. 
Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 18.8.18. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 18.8.19. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If you want to examine what an individual pending request contains before you approve it, see the short inspection sketch that follows.
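This optional inspection is not part of the documented procedure; it simply shows how to look at one pending request before approving it. The CSR name csr-8b2br is the hypothetical name taken from the example output above; substitute a name from your own oc get csr output.

# Show the requestor, requested usages, and current condition of a single CSR.
oc describe csr csr-8b2br

# Or dump the full object, including the requesting service account and the encoded request.
oc get csr csr-8b2br -o yaml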
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 18.8.20. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. 
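Before moving on to the Operator configuration, note that the automatic-approval requirement described in the note above, for kubelet serving CSRs on user-provisioned infrastructure, can be prototyped from the commands already shown in this section. The following loop is a hedged sketch only: it approves every pending CSR without verifying the requestor or the node identity, which a production approver must do, so treat it as a starting point rather than a finished method.

#!/bin/bash
# Sketch: periodically approve pending CSRs during cluster bring-up.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done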
Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Configure the Operators that are not available. 18.8.20.1. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 18.8.20.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 18.8.20.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. 
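As an optional, read-only sanity check before you begin, you can confirm that the prerequisite storage is visible to the cluster. The exact output depends on your storage provisioner; these commands change nothing.

# List the storage classes available to the cluster.
oc get storageclass

# List any persistent volume claims already present in the registry namespace.
oc get pvc -n openshift-image-registry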
Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The persistent storage must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when replicating to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 18.8.20.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 18.8.20.2.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.
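After you complete the following procedure, you can confirm that the two patched fields were applied with a read-only query. This check is a convenience sketch, not part of the documented procedure.

# Print the rollout strategy and replica count of the registry Operator configuration.
oc get config.imageregistry.operator.openshift.io/cluster \
  -o jsonpath='{.spec.rolloutStrategy}{" "}{.spec.replicas}{"\n"}'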
Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring registry storage for VMware vSphere . 18.8.21. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
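Before running the watch in the following procedure, a quick read-only look at overall progress is sometimes useful. This is an optional convenience and not part of the documented procedure.

# Overall cluster version and installation progress, as reported by the Cluster Version Operator.
oc get clusterversion

# The same cluster Operator listing used below, shown once instead of under watch.
oc get clusteroperators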
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 18.8.22. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 18.8.23. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 18.8.24. Next steps Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 18.9. Uninstalling a cluster on VMC You can remove a cluster installed on VMware vSphere infrastructure that you deployed to VMware Cloud (VMC) on AWS by using installer-provisioned infrastructure. 18.9.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud.
Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites Have a copy of the installation program that you used to deploy the cluster. Have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip clusterOSImage: http://mirror.example.com/images/rhcos-48.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 0 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 0 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 0 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 0 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 0 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 0 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 0 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 0 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 0 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 0 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 0 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 0 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/installing/installing-on-vmc |
probe::socket.aio_write | probe::socket.aio_write Name probe::socket.aio_write - Message send via sock_aio_write Synopsis socket.aio_write Values flags Socket flags value type Socket type value size Message size in bytes family Protocol family value protocol Protocol value name Name of this probe state Socket state value Context The message sender Description Fires at the beginning of sending a message on a socket via the sock_aio_write function | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-socket-aio-write |
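A minimal usage sketch for this probe point (assuming SystemTap and the matching kernel debuginfo are installed; the output format below is illustrative, not part of the tapset):
# Print the probe name, protocol family, socket type, and message size
# each time a message send enters sock_aio_write.
stap -e 'probe socket.aio_write {
  printf("%s family=%d type=%d size=%d\n", name, family, type, size)
}'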
Chapter 12. Deployment errors | Chapter 12. Deployment errors 12.1. Order of cleanup operations Depending on where deployment fails, you may need to perform a number of cleanup operations. Always perform cleanup for tasks in reverse order to the order of the tasks themselves. For example, during deployment, we perform the following tasks in order: Configure Network-Bound Disk Encryption using Ansible. Configure Red Hat Gluster Storage using the Web Console. Configure the Hosted Engine using the Web Console. If deployment fails at step 2, perform cleanup for step 2. Then, if necessary, perform cleanup for step 1. 12.2. Failed to deploy storage If an error occurs during storage deployment , the deployment process halts and Deployment failed is displayed. Deploying storage failed Review the Web Console output for error information. Click Clean up to remove any potentially incorrect changes to the system. If your deployment uses Network-Bound Disk Encryption, you must then follow the process in Cleaning up Network-Bound Disk Encryption after a failed deployment . Click Redeploy and correct any entered values that may have caused errors. If you need help resolving errors, contact Red Hat Support with details. Return to storage deployment to try again. 12.2.1. Cleaning up Network-Bound Disk Encryption after a failed deployment If you are using Network-Bound Disk Encryption and deployment fails, you cannot just click the Cleanup button in order to try again. You must also run the luks_device_cleanup.yml playbook to complete the cleaning process before you start again. Run this playbook as shown, providing the same luks_tang_inventory.yml file that you provided during setup. 12.2.2. Error: VDO signature detected on device During storage deployment, the Create VDO with specified size task may fail with the VDO signature detected on device error. This error occurs when the specified device is already a VDO device, or when the device was previously configured as a VDO device and was not cleaned up correctly. If you specified a VDO device accidentally , return to storage configuration and specify a different non-VDO device. If you specified a device that has been used as a VDO device previously: Check the device type. If you see TYPE="vdo" in the output, this device was not cleaned correctly. Follow the steps in Manually cleaning up a VDO device to use this device. Then return to storage deployment to try again. Avoid this error by specifying clean devices, and by using the Clean up button in the storage deployment window to clean up any failed deployments. 12.2.3. Manually cleaning up a VDO device Follow this process to manually clean up a VDO device that has caused a deployment failure. Warning This is a destructive process. You will lose all data on the device that you clean up. Procedure Clean the device using wipefs. Verify Confirm that the device does not have TYPE="vdo" set any more. steps Return to storage deployment to try again. 12.3. Failed to prepare virtual machine If an error occurs while preparing the virtual machine in deployment , deployment pauses, and you see a screen similar to the following: Preparing virtual machine failed Review the Web Console output for error information. Click Back and correct any entered values that may have caused errors. Ensure proper values for network configurations are provided in VM tab. If you need help resolving errors, contact Red Hat Support with details. Ensure that the rhvm-appliance package is available on the first hyperconverged host. 
Return to Hosted Engine deployment to try again. If you closed the deployment wizard while you resolved errors, you can select Use existing configuration when you retry the deployment process. 12.4. Failed to deploy hosted engine If an error occurs during hosted engine deployment, deployment pauses and Deployment failed is displayed. Hosted engine deployment failed Review the Web Console output for error information. Remove the contents of the engine volume. Mount the engine volume. Remove the contents of the volume. Unmount the engine volume. Click Redeploy and correct any entered values that may have caused errors. If the deployment fails after performing steps a, b, and c above, perform these steps again and this time clean the Hosted Engine: Return to deployment to try again. If you closed the deployment wizard while you resolved errors, you can select Use existing configuration when you retry the deployment process. If you need help resolving errors, contact Red Hat Support with details. | [
"ansible-playbook -i luks_tang_inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/luks_device_cleanup.yml --ask-vault-pass",
"TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:9 failed: [host1.example.com] (item={u'writepolicy': u'auto', u'name': u'vdo_sdb', u'readcachesize': u'20M', u'readcache': u'enabled', u'emulate512': u'off', u'logicalsize': u'11000G', u'device': u'/dev/sdb', u'slabsize': u'32G', u'blockmapcachesize': u'128M'}) => {\"ansible_loop_var\": \"item\", \"changed\": false, \"err\": \" vdo: ERROR - vdo signature detected on /dev/sdb at offset 0; use --force to override\\n\", \"item\": {\"blockmapcachesize\": \"128M\", \"device\": \"/dev/sdb\", \"emulate512\": \"off\", \"logicalsize\": \"11000G\", \"name\": \"vdo_sdb\", \"readcache\": \"enabled\", \"readcachesize\": \"20M\", \"slabsize\": \"32G\", \"writepolicy\": \"auto\"}, \"msg\": \"Creating VDO vdo_sdb failed.\", \"rc\": 5}",
"blkid -p /dev/sdb /dev/sdb: UUID=\"fee52367-c2ca-4fab-a6e9-58267895fe3f\" TYPE=\"vdo\" USAGE=\"other\"",
"wipefs -a /dev/sdX",
"blkid -p /dev/sdb /dev/sdb: UUID=\"fee52367-c2ca-4fab-a6e9-58267895fe3f\" TYPE=\"vdo\" USAGE=\"other\"",
"yum install rhvm-appliance",
"mount -t glusterfs <server1>:/engine /mnt/test",
"rm -rf /mnt/test/*",
"umount /mnt/test",
"ovirt-hosted-engine-cleanup"
] | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/automating_rhhi_for_virtualization_deployment/tshoot-deploy-error |
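Taken together, the engine-volume cleanup in section 12.4 amounts to the following sequence (a sketch only; <server1> and /mnt/test are the same placeholders used above, and the final step applies only if redeployment still fails afterwards):
# Mount the engine volume, wipe its contents, then unmount it
mount -t glusterfs <server1>:/engine /mnt/test
rm -rf /mnt/test/*
umount /mnt/test
# If deployment fails again after redeploying, also clean the Hosted Engine
ovirt-hosted-engine-cleanup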
Chapter 1. Guide Overview | Chapter 1. Guide Overview The purpose of this guide is to walk through the steps that need to be completed prior to booting up the Red Hat Single Sign-On server for the first time. If you just want to test drive Red Hat Single Sign-On, it pretty much runs out of the box with its own embedded and local-only database. For actual deployments that are going to be run in production you'll need to decide how you want to manage server configuration at runtime (standalone or domain mode), configure a shared database for Red Hat Single Sign-On storage, set up encryption and HTTPS, and finally set up Red Hat Single Sign-On to run in a cluster. This guide walks through each and every aspect of any pre-boot decisions and setup you must do prior to deploying the server. One thing to particularly note is that Red Hat Single Sign-On is derived from the JBoss EAP Application Server. Many aspects of configuring Red Hat Single Sign-On revolve around JBoss EAP configuration elements. Often this guide will direct you to documentation outside of the manual if you want to dive into more detail. 1.1. Recommended additional external documentation Red Hat Single Sign-On is built on top of the JBoss EAP application server and its sub-projects like Infinispan (for caching) and Hibernate (for persistence). This guide only covers basics for infrastructure-level configuration. It is highly recommended that you peruse the documentation for JBoss EAP and its sub projects. Here is the link to the documentation: JBoss EAP Configuration Guide | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_installation_and_configuration_guide/guide_overview |
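For the test-drive case described above, booting the server with its embedded local-only database is a one-liner (a sketch, assuming the distribution was unzipped to a directory referred to here as RHSSO_HOME; that name is a placeholder, not a required variable):
# Start Red Hat Single Sign-On in standalone mode
cd RHSSO_HOME/bin
./standalone.sh
# By default the welcome page and admin console are served at http://localhost:8080/auth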
Chapter 5. Scale [autoscaling/v1] | Chapter 5. Scale [autoscaling/v1] Description Scale represents a scaling request for a resource. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . spec object ScaleSpec describes the attributes of a scale subresource. status object ScaleStatus represents the current status of a scale subresource. 5.1.1. .spec Description ScaleSpec describes the attributes of a scale subresource. Type object Property Type Description replicas integer replicas is the desired number of instances for the scaled object. 5.1.2. .status Description ScaleStatus represents the current status of a scale subresource. Type object Required replicas Property Type Description replicas integer replicas is the actual number of observed instances of the scaled object. selector string selector is the label query over pods that should match the replicas count. This is same as the label selector but in the string format to avoid introspection by clients. The string will be in the same format as the query-param syntax. More info about label selectors: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ 5.2. API endpoints The following API endpoints are available: /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale GET : read scale of the specified Deployment PATCH : partially update scale of the specified Deployment PUT : replace scale of the specified Deployment /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/scale GET : read scale of the specified ReplicaSet PATCH : partially update scale of the specified ReplicaSet PUT : replace scale of the specified ReplicaSet /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/scale GET : read scale of the specified StatefulSet PATCH : partially update scale of the specified StatefulSet PUT : replace scale of the specified StatefulSet /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/scale GET : read scale of the specified ReplicationController PATCH : partially update scale of the specified ReplicationController PUT : replace scale of the specified ReplicationController 5.2.1. /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale Table 5.1. Global path parameters Parameter Type Description name string name of the Scale HTTP method GET Description read scale of the specified Deployment Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified Deployment Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified Deployment Table 5.5. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.6. Body parameters Parameter Type Description body Scale schema Table 5.7. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 5.2.2. /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/scale Table 5.8. Global path parameters Parameter Type Description name string name of the Scale HTTP method GET Description read scale of the specified ReplicaSet Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified ReplicaSet Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified ReplicaSet Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body Scale schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 5.2.3. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/scale Table 5.15. Global path parameters Parameter Type Description name string name of the Scale HTTP method GET Description read scale of the specified StatefulSet Table 5.16. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified StatefulSet Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified StatefulSet Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body Scale schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 5.2.4. /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/scale Table 5.22. Global path parameters Parameter Type Description name string name of the Scale HTTP method GET Description read scale of the specified ReplicationController Table 5.23. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified ReplicationController Table 5.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.25. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified ReplicationController Table 5.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.27. Body parameters Parameter Type Description body Scale schema Table 5.28. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/autoscale_apis/scale-autoscaling-v1 |
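A short sketch of exercising this subresource from the command line (the namespace and deployment name are placeholders; oc scale drives the same endpoint documented above):
# Read the Scale object through the documented endpoint
oc get --raw /apis/apps/v1/namespaces/<namespace>/deployments/<name>/scale
# Set the desired replica count; this updates spec.replicas on the same subresource
oc scale deployment/<name> --replicas=3 -n <namespace>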
Chapter 9. Configuring TLS security profiles | Chapter 9. Configuring TLS security profiles TLS security profiles provide a way for servers to regulate which ciphers a client can use when connecting to the server. This ensures that OpenShift Container Platform components use cryptographic libraries that do not allow known insecure protocols, ciphers, or algorithms. Cluster administrators can choose which TLS security profile to use for each of the following components: the Ingress Controller the control plane This includes the Kubernetes API server, OpenShift API server, OpenShift OAuth API server, and OpenShift OAuth server. 9.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 9.1. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Note In OpenShift Container Platform 4.6, 4.7, and 4.8, the Modern profile is unsupported. If selected, the Intermediate profile is enabled. Important The Modern profile is currently not supported. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note OpenShift Container Platform router enables Red Hat-distributed OpenSSL default set of TLS 1.3 cipher suites. Your cluster might accept TLS 1.3 connections and cipher suites, even though TLS 1.3 is unsupported in OpenShift Container Platform 4.6, 4.7, and 4.8. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 9.2. Viewing TLS security profile details You can view the minimum TLS version and ciphers for the predefined TLS security profiles for each of the following components: Ingress Controller and control plane. Important The effective configuration of minimum TLS version and list of ciphers for a profile might differ between components. Procedure View details for a specific TLS security profile: USD oc explain <component>.spec.tlsSecurityProfile.<profile> 1 1 For <component> , specify ingresscontroller or apiserver . For <profile> , specify old , intermediate , or custom . 
For example, to check the ciphers included for the intermediate profile for the control plane: USD oc explain apiserver.spec.tlsSecurityProfile.intermediate Example output KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2 View all details for the tlsSecurityProfile field of a component: USD oc explain <component>.spec.tlsSecurityProfile 1 1 For <component> , specify ingresscontroller or apiserver . For example, to check all details for the tlsSecurityProfile field for the Ingress Controller: USD oc explain ingresscontroller.spec.tlsSecurityProfile Example output KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: ... FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 3 type <string> ... 1 Lists ciphers and minimum version for the intermediate profile here. 2 Lists ciphers and minimum version for the modern profile here. 3 Lists ciphers and minimum version for the old profile here. 9.3. Configuring the TLS security profile for the Ingress Controller To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server. Sample IngressController CR that configures the Old TLS security profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers. You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters. Important The HAProxy Ingress Controller image does not support TLS 1.3 and because the Modern profile requires TLS 1.3 , it is not supported. 
The Ingress Operator converts the Modern profile to Intermediate . The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 , and TLS 1.3 of a Custom profile to 1.2 . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.tlsSecurityProfile field: Sample IngressController CR for a Custom profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 ... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the profile is set in the IngressController CR: USD oc describe IngressController default -n openshift-ingress-operator Example output Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController ... Spec: ... Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... 9.4. Configuring the TLS security profile for the control plane To configure a TLS security profile for the control plane, edit the APIServer custom resource (CR) to specify a predefined or custom TLS security profile. Setting the TLS security profile in the APIServer CR propagates the setting to the following control plane components: Kubernetes API server OpenShift API server OpenShift OAuth API server OpenShift OAuth server If a TLS security profile is not configured, the default TLS security profile is Intermediate . Note The default TLS security profile for the Ingress Controller is based on the TLS security profile set for the API server. Sample APIServer CR that configures the Old TLS security profile apiVersion: config.openshift.io/v1 kind: APIServer ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers required to communicate with the control plane components. You can see the configured TLS security profile in the APIServer custom resource (CR) under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed. Note The control plane does not support TLS 1.3 as the minimum TLS version; the Modern profile is not supported because it requires TLS 1.3 . Prerequisites You have access to the cluster as a user with the cluster-admin role. 
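Before editing the APIServer CR in the procedure that follows, you can optionally confirm how the API server currently negotiates TLS from a client machine. This check is a sketch and not part of the original procedure; the API hostname is a placeholder for your cluster's API endpoint, and the available flags depend on your OpenSSL build:

# Attempt a TLS 1.1 handshake against the Kubernetes API server; with the default
# Intermediate profile (minimum TLS 1.2) the handshake is expected to be rejected
$ openssl s_client -connect api.<cluster_domain>:6443 -tls1_1 < /dev/null

Repeating the same check after you apply a Custom profile with minTLSVersion: VersionTLS11 should show the handshake being accepted.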
Procedure Edit the default APIServer CR to configure the TLS security profile: USD oc edit APIServer cluster Add the spec.tlsSecurityProfile field: Sample APIServer CR for a Custom profile apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the TLS security profile is set in the APIServer CR: USD oc describe apiserver cluster Example output Name: cluster Namespace: ... API Version: config.openshift.io/v1 Kind: APIServer ... Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... | [
"oc explain <component>.spec.tlsSecurityProfile.<profile> 1",
"oc explain apiserver.spec.tlsSecurityProfile.intermediate",
"KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2",
"oc explain <component>.spec.tlsSecurityProfile 1",
"oc explain ingresscontroller.spec.tlsSecurityProfile",
"KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 3 type <string>",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe IngressController default -n openshift-ingress-operator",
"Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"apiVersion: config.openshift.io/v1 kind: APIServer spec: tlsSecurityProfile: old: {} type: Old",
"oc edit APIServer cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe apiserver cluster",
"Name: cluster Namespace: API Version: config.openshift.io/v1 Kind: APIServer Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/security_and_compliance/tls-security-profiles |
5.8.4. Specifying Units | 5.8.4. Specifying Units To specify the unit for the LVM report display, use the --units argument of the report command. You can specify (b)ytes, (k)ilobytes, (m)egabytes, (g)igabytes, (t)erabytes, (e)xabytes, (p)etabytes, and (h)uman-readable. The default display is human-readable. You can override the default by setting the units parameter in the global section of the lvm.conf file. The following example specifies the output of the pvs command in megabytes rather than the default gigabytes. By default, units are displayed in powers of 2 (multiples of 1024). You can specify that units be displayed in multiples of 1000 by capitalizing the unit specification (B, K, M, G, T, H). The following command displays the output as a multiple of 1024, the default behavior. The following command displays the output as a multiple of 1000. You can also specify (s)ectors (defined as 512 bytes) or custom units. The following example displays the output of the pvs command as a number of sectors. The following example displays the output of the pvs command in units of 4 MB. | [
"pvs --units m PV VG Fmt Attr PSize PFree /dev/sda1 lvm2 -- 17555.40M 17555.40M /dev/sdb1 new_vg lvm2 a- 17552.00M 17552.00M /dev/sdc1 new_vg lvm2 a- 17552.00M 17500.00M /dev/sdd1 new_vg lvm2 a- 17552.00M 17552.00M",
"pvs PV VG Fmt Attr PSize PFree /dev/sdb1 new_vg lvm2 a- 17.14G 17.14G /dev/sdc1 new_vg lvm2 a- 17.14G 17.09G /dev/sdd1 new_vg lvm2 a- 17.14G 17.14G",
"pvs --units G PV VG Fmt Attr PSize PFree /dev/sdb1 new_vg lvm2 a- 18.40G 18.40G /dev/sdc1 new_vg lvm2 a- 18.40G 18.35G /dev/sdd1 new_vg lvm2 a- 18.40G 18.40G",
"pvs --units s PV VG Fmt Attr PSize PFree /dev/sdb1 new_vg lvm2 a- 35946496S 35946496S /dev/sdc1 new_vg lvm2 a- 35946496S 35840000S /dev/sdd1 new_vg lvm2 a- 35946496S 35946496S",
"pvs --units 4m PV VG Fmt Attr PSize PFree /dev/sdb1 new_vg lvm2 a- 4388.00U 4388.00U /dev/sdc1 new_vg lvm2 a- 4388.00U 4375.00U /dev/sdd1 new_vg lvm2 a- 4388.00U 4388.00U"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/report_units |
Chapter 6. Service Provider Interfaces (SPI) | Chapter 6. Service Provider Interfaces (SPI) Red Hat Single Sign-On is designed to cover most use-cases without requiring custom code, but we also want it to be customizable. To achieve this Red Hat Single Sign-On has a number of Service Provider Interfaces (SPI) for which you can implement your own providers. 6.1. Implementing an SPI To implement an SPI you need to implement its ProviderFactory and Provider interfaces. You also need to create a service configuration file. For example, to implement the Theme Selector SPI you need to implement ThemeSelectorProviderFactory and ThemeSelectorProvider and also provide the file META-INF/services/org.keycloak.theme.ThemeSelectorProviderFactory . Example ThemeSelectorProviderFactory: package org.acme.provider; import ... public class MyThemeSelectorProviderFactory implements ThemeSelectorProviderFactory { @Override public ThemeSelectorProvider create(KeycloakSession session) { return new MyThemeSelectorProvider(session); } @Override public void init(Config.Scope config) { } @Override public void postInit(KeycloakSessionFactory factory) { } @Override public void close() { } @Override public String getId() { return "myThemeSelector"; } } Note Keycloak creates a single instance of provider factories which makes it possible to store state for multiple requests. Provider instances are created by calling create on the factory for each request so these should be light-weight object. Example ThemeSelectorProvider: package org.acme.provider; import ... public class MyThemeSelectorProvider implements ThemeSelectorProvider { public MyThemeSelectorProvider(KeycloakSession session) { } @Override public String getThemeName(Theme.Type type) { return "my-theme"; } @Override public void close() { } } Example service configuration file ( META-INF/services/org.keycloak.theme.ThemeSelectorProviderFactory ): You can configure your provider through standalone.xml , standalone-ha.xml , or domain.xml . For example by adding the following to standalone.xml : <spi name="themeSelector"> <provider name="myThemeSelector" enabled="true"> <properties> <property name="theme" value="my-theme"/> </properties> </provider> </spi> Then you can retrieve the config in the ProviderFactory init method: public void init(Config.Scope config) { String themeName = config.get("theme"); } Your provider can also lookup other providers if needed. For example: public class MyThemeSelectorProvider implements ThemeSelectorProvider { private KeycloakSession session; public MyThemeSelectorProvider(KeycloakSession session) { this.session = session; } @Override public String getThemeName(Theme.Type type) { return session.getContext().getRealm().getLoginTheme(); } } 6.1.1. Show info from your SPI implementation in admin console Sometimes it is useful to show additional info about your Provider to a Red Hat Single Sign-On administrator. You can show provider build time information (eg. version of custom provider currently installed), current configuration of the provider (eg. url of remote system your provider talks to) or some operational info (average time of response from remote system your provider talks to). Red Hat Single Sign-On admin console provides Server Info page to show this kind of information. To show info from your provider it is enough to implement org.keycloak.provider.ServerInfoAwareProviderFactory interface in your ProviderFactory . 
Example implementation for MyThemeSelectorProviderFactory from example: package org.acme.provider; import ... public class MyThemeSelectorProviderFactory implements ThemeSelectorProviderFactory, ServerInfoAwareProviderFactory { ... @Override public Map<String, String> getOperationalInfo() { Map<String, String> ret = new LinkedHashMap<>(); ret.put("theme-name", "my-theme"); return ret; } } 6.2. Registering provider implementations There are two ways to register provider implementations. In most cases the simplest way is to use the Red Hat Single Sign-On deployer approach as this handles a number of dependencies automatically for you. It also supports hot deployment as well as re-deployment. The alternative approach is to deploy as a module. If you are creating a custom SPI you will need to deploy it as a module, otherwise we recommend using the Red Hat Single Sign-On deployer approach. 6.2.1. Using the Red Hat Single Sign-On Deployer If you copy your provider jar to the Red Hat Single Sign-On standalone/deployments/ directory, your provider will automatically be deployed. Hot deployment works too. Additionally, your provider jar works similarly to other components deployed in a JBoss EAP environment in that they can use facilities like the jboss-deployment-structure.xml file. This file allows you to set up dependencies on other components and load third-party jars and modules. Provider jars can also be contained within other deployable units like EARs and WARs. Deploying with a EAR actually makes it really easy to use third party jars as you can just put these libraries in the EAR's lib/ directory. 6.2.2. Register a provider using Modules To register a provider using Modules first create a module. To do this you can either use the jboss-cli script or manually create a folder inside KEYCLOAK_HOME/modules and add your jar and a module.xml . For example to add the event listener sysout example provider using the jboss-cli script execute: Or to manually create it start by creating the folder KEYCLOAK_HOME/modules/org/acme/provider/main . Then copy provider.jar to this folder and create module.xml with the following content: <?xml version="1.0" encoding="UTF-8"?> <module xmlns="urn:jboss:module:1.3" name="org.acme.provider"> <resources> <resource-root path="provider.jar"/> </resources> <dependencies> <module name="org.keycloak.keycloak-core"/> <module name="org.keycloak.keycloak-server-spi"/> </dependencies> </module> Once you've created the module you need to register this module with Red Hat Single Sign-On. This is done by editing the keycloak-server subsystem section of standalone.xml , standalone-ha.xml , or domain.xml , and adding it to the providers: <subsystem xmlns="urn:jboss:domain:keycloak-server:1.1"> <web-context>auth</web-context> <providers> <provider>module:org.keycloak.examples.event-sysout</provider> </providers> ... 6.2.3. Disabling a provider You can disable a provider by setting the enabled attribute for the provider to false in standalone.xml , standalone-ha.xml , or domain.xml . For example to disable the Infinispan user cache provider add: <spi name="userCache"> <provider name="infinispan" enabled="false"/> </spi> 6.3. Leveraging Java EE The service providers can be packaged within any Java EE component so long as you set up the META-INF/services file correctly to point to your providers. For example, if your provider needs to use third party libraries, you can package up your provider within an ear and store these third party libraries in the ear's lib/ directory. 
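As an illustration of the packaging and deployer approach described above, the following shell sketch (the build path and jar name are hypothetical) checks that a provider archive carries the META-INF/services descriptor and then hot-deploys it by copying it into the deployments directory:

# Confirm the archive contains the service descriptor and the factory class
$ jar tf target/provider.jar | grep -E 'META-INF/services|ProviderFactory'

# Hot deploy the provider by copying it into the deployments directory
$ cp target/provider.jar $KEYCLOAK_HOME/standalone/deployments/

The same check applies to an EAR, except that third-party libraries sit in its lib/ directory as noted above.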
Also note that provider jars can make use of the jboss-deployment-structure.xml file that EJBs, WARS, and EARs can use in a JBoss EAP environment. See the JBoss EAP documentation for more details on this file. It allows you to pull in external dependencies among other fine grain actions. ProviderFactory implementations are required to be plain java objects. But, we also currently support implementing provider classes as Stateful EJBs. This is how you would do it: @Stateful @Local(EjbExampleUserStorageProvider.class) public class EjbExampleUserStorageProvider implements UserStorageProvider, UserLookupProvider, UserRegistrationProvider, UserQueryProvider, CredentialInputUpdater, CredentialInputValidator, OnUserCache { @PersistenceContext protected EntityManager em; protected ComponentModel model; protected KeycloakSession session; public void setModel(ComponentModel model) { this.model = model; } public void setSession(KeycloakSession session) { this.session = session; } @Remove @Override public void close() { } ... } You have to define the @Local annotation and specify your provider class there. If you don't do this, EJB will not proxy the provider instance correctly and your provider won't work. You must put the @Remove annotation on the close() method of your provider. If you don't, the stateful bean will never be cleaned up and you may eventually see error messages. Implementations of ProviderFactory are required to be plain java objects. Your factory class would perform a JNDI lookup of the Stateful EJB in its create() method. public class EjbExampleUserStorageProviderFactory implements UserStorageProviderFactory<EjbExampleUserStorageProvider> { @Override public EjbExampleUserStorageProvider create(KeycloakSession session, ComponentModel model) { try { InitialContext ctx = new InitialContext(); EjbExampleUserStorageProvider provider = (EjbExampleUserStorageProvider)ctx.lookup( "java:global/user-storage-jpa-example/" + EjbExampleUserStorageProvider.class.getSimpleName()); provider.setModel(model); provider.setSession(session); return provider; } catch (Exception e) { throw new RuntimeException(e); } } 6.4. JavaScript Providers Red Hat Single Sign-On has the ability to execute scripts during runtime in order to allow administrators to customize specific functionalities: Authenticator JavaScript Policy OpenID Connect Protocol Mapper 6.4.1. Authenticator Authentication scripts must provide at least one of the following functions: authenticate(..) , which is called from Authenticator#authenticate(AuthenticationFlowContext) action(..) , which is called from Authenticator#action(AuthenticationFlowContext) Custom Authenticator should at least provide the authenticate(..) function. You can use the javax.script.Bindings script within the code. script the ScriptModel to access script metadata realm the RealmModel user the current UserModel session the active KeycloakSession authenticationSession the current AuthenticationSessionModel httpRequest the current org.jboss.resteasy.spi.HttpRequest LOG a org.jboss.logging.Logger scoped to ScriptBasedAuthenticator Note You can extract additional context information from the context argument passed to the authenticate(context) action(context) function. 
AuthenticationFlowError = Java.type("org.keycloak.authentication.AuthenticationFlowError"); function authenticate(context) { LOG.info(script.name + " --> trace auth for: " + user.username); if ( user.username === "tester" && user.getAttribute("someAttribute") && user.getAttribute("someAttribute").contains("someValue")) { context.failure(AuthenticationFlowError.INVALID_USER); return; } context.success(); } 6.4.2. Create a JAR with the scripts to deploy Note JAR files are regular ZIP files with a .jar extension. In order to make your scripts available to Red Hat Single Sign-On you need to deploy them to the server. For that, you should create a JAR file with the following structure: The META-INF/keycloak-scripts.json is a file descriptor that provides metadata information about the scripts you want to deploy. It is a JSON file with the following structure: { "authenticators": [ { "name": "My Authenticator", "fileName": "my-script-authenticator.js", "description": "My Authenticator from a JS file" } ], "policies": [ { "name": "My Policy", "fileName": "my-script-policy.js", "description": "My Policy from a JS file" } ], "mappers": [ { "name": "My Mapper", "fileName": "my-script-mapper.js", "description": "My Mapper from a JS file" } ] } This file should reference the different types of script providers that you want to deploy: authenticators For OpenID Connect Script Authenticators. You can have one or multiple authenticators in the same JAR file policies For JavaScript Policies when using Red Hat Single Sign-On Authorization Services. You can have one or multiple policies in the same JAR file mappers For OpenID Connect Script Protocol Mappers. You can have one or multiple mappers in the same JAR file For each script file in your JAR file you must have a corresponding entry in META-INF/keycloak-scripts.json that maps your scripts files to a specific provider type. For that you should provide the following properties for each entry: name A friendly name that will be used to show the scripts through the Red Hat Single Sign-On Administration Console. If not provided, the name of the script file will be used instead description An optional text that better describes the intend of the script file fileName The name of the script file. This property is mandatory and should map to a file within the JAR. 6.4.3. Deploy the Script JAR Once you have a JAR file with a descriptor and the scripts you want to deploy, you just need to copy the JAR to the to the Red Hat Single Sign-On standalone/deployments/ directory. 6.4.4. Using Red Hat Single Sign-On Administration Console to upload scripts Note Ability to upload scripts through the admin console is deprecated and will be removed in a future version of Red Hat Single Sign-On Administrators cannot upload scripts to the server. This behavior prevents potential harm to the system in case malicious scripts are accidentally executed. Administrators should always deploy scripts directly to the server using a JAR file to prevent attacks when you run scripts at runtime. Ability to upload scripts can be explicitly enabled. This should be used with great care and plans should be created to deploy all scripts directly to the server as soon as possible. For more details about how to enable the upload_scripts feature. Please, take a look at the Profiles . 6.5. Available SPIs If you want to see list of all available SPIs at runtime, you can check Server Info page in admin console as described in Admin Console section. | [
"package org.acme.provider; import public class MyThemeSelectorProviderFactory implements ThemeSelectorProviderFactory { @Override public ThemeSelectorProvider create(KeycloakSession session) { return new MyThemeSelectorProvider(session); } @Override public void init(Config.Scope config) { } @Override public void postInit(KeycloakSessionFactory factory) { } @Override public void close() { } @Override public String getId() { return \"myThemeSelector\"; } }",
"package org.acme.provider; import public class MyThemeSelectorProvider implements ThemeSelectorProvider { public MyThemeSelectorProvider(KeycloakSession session) { } @Override public String getThemeName(Theme.Type type) { return \"my-theme\"; } @Override public void close() { } }",
"org.acme.provider.MyThemeSelectorProviderFactory",
"<spi name=\"themeSelector\"> <provider name=\"myThemeSelector\" enabled=\"true\"> <properties> <property name=\"theme\" value=\"my-theme\"/> </properties> </provider> </spi>",
"public void init(Config.Scope config) { String themeName = config.get(\"theme\"); }",
"public class MyThemeSelectorProvider implements ThemeSelectorProvider { private KeycloakSession session; public MyThemeSelectorProvider(KeycloakSession session) { this.session = session; } @Override public String getThemeName(Theme.Type type) { return session.getContext().getRealm().getLoginTheme(); } }",
"package org.acme.provider; import public class MyThemeSelectorProviderFactory implements ThemeSelectorProviderFactory, ServerInfoAwareProviderFactory { @Override public Map<String, String> getOperationalInfo() { Map<String, String> ret = new LinkedHashMap<>(); ret.put(\"theme-name\", \"my-theme\"); return ret; } }",
"KEYCLOAK_HOME/bin/jboss-cli.sh --command=\"module add --name=org.acme.provider --resources=target/provider.jar --dependencies=org.keycloak.keycloak-core,org.keycloak.keycloak-server-spi\"",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <module xmlns=\"urn:jboss:module:1.3\" name=\"org.acme.provider\"> <resources> <resource-root path=\"provider.jar\"/> </resources> <dependencies> <module name=\"org.keycloak.keycloak-core\"/> <module name=\"org.keycloak.keycloak-server-spi\"/> </dependencies> </module>",
"<subsystem xmlns=\"urn:jboss:domain:keycloak-server:1.1\"> <web-context>auth</web-context> <providers> <provider>module:org.keycloak.examples.event-sysout</provider> </providers>",
"<spi name=\"userCache\"> <provider name=\"infinispan\" enabled=\"false\"/> </spi>",
"@Stateful @Local(EjbExampleUserStorageProvider.class) public class EjbExampleUserStorageProvider implements UserStorageProvider, UserLookupProvider, UserRegistrationProvider, UserQueryProvider, CredentialInputUpdater, CredentialInputValidator, OnUserCache { @PersistenceContext protected EntityManager em; protected ComponentModel model; protected KeycloakSession session; public void setModel(ComponentModel model) { this.model = model; } public void setSession(KeycloakSession session) { this.session = session; } @Remove @Override public void close() { } }",
"public class EjbExampleUserStorageProviderFactory implements UserStorageProviderFactory<EjbExampleUserStorageProvider> { @Override public EjbExampleUserStorageProvider create(KeycloakSession session, ComponentModel model) { try { InitialContext ctx = new InitialContext(); EjbExampleUserStorageProvider provider = (EjbExampleUserStorageProvider)ctx.lookup( \"java:global/user-storage-jpa-example/\" + EjbExampleUserStorageProvider.class.getSimpleName()); provider.setModel(model); provider.setSession(session); return provider; } catch (Exception e) { throw new RuntimeException(e); } }",
"AuthenticationFlowError = Java.type(\"org.keycloak.authentication.AuthenticationFlowError\"); function authenticate(context) { LOG.info(script.name + \" --> trace auth for: \" + user.username); if ( user.username === \"tester\" && user.getAttribute(\"someAttribute\") && user.getAttribute(\"someAttribute\").contains(\"someValue\")) { context.failure(AuthenticationFlowError.INVALID_USER); return; } context.success(); }",
"META-INF/keycloak-scripts.json my-script-authenticator.js my-script-policy.js my-script-mapper.js",
"{ \"authenticators\": [ { \"name\": \"My Authenticator\", \"fileName\": \"my-script-authenticator.js\", \"description\": \"My Authenticator from a JS file\" } ], \"policies\": [ { \"name\": \"My Policy\", \"fileName\": \"my-script-policy.js\", \"description\": \"My Policy from a JS file\" } ], \"mappers\": [ { \"name\": \"My Mapper\", \"fileName\": \"my-script-mapper.js\", \"description\": \"My Mapper from a JS file\" } ] }"
] | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_developer_guide/providers |
Chapter 4. Configuring Red Hat High Availability Add-On With Conga | Chapter 4. Configuring Red Hat High Availability Add-On With Conga This chapter describes how to configure Red Hat High Availability Add-On software using Conga . For information on using Conga to manage a running cluster, see Chapter 5, Managing Red Hat High Availability Add-On With Conga . Note Conga is a graphical user interface that you can use to administer the Red Hat High Availability Add-On. Note, however, that in order to use this interface effectively you need to have a good and clear understanding of the underlying concepts. Learning about cluster configuration by exploring the available features in the user interface is not recommended, as it may result in a system that is not robust enough to keep all services running when components fail. This chapter consists of the following sections: Section 4.1, "Configuration Tasks" Section 4.2, "Starting luci " Section 4.3, "Controlling Access to luci" Section 4.4, "Creating a Cluster" Section 4.5, "Global Cluster Properties" Section 4.6, "Configuring Fence Devices" Section 4.7, "Configuring Fencing for Cluster Members" Section 4.8, "Configuring a Failover Domain" Section 4.9, "Configuring Global Cluster Resources" Section 4.10, "Adding a Cluster Service to the Cluster" 4.1. Configuration Tasks Configuring Red Hat High Availability Add-On software with Conga consists of the following steps: Configuring and running the Conga configuration user interface - the luci server. Refer to Section 4.2, "Starting luci " . Creating a cluster. Refer to Section 4.4, "Creating a Cluster" . Configuring global cluster properties. Refer to Section 4.5, "Global Cluster Properties" . Configuring fence devices. Refer to Section 4.6, "Configuring Fence Devices" . Configuring fencing for cluster members. Refer to Section 4.7, "Configuring Fencing for Cluster Members" . Creating failover domains. Refer to Section 4.8, "Configuring a Failover Domain" . Creating resources. Refer to Section 4.9, "Configuring Global Cluster Resources" . Creating cluster services. Refer to Section 4.10, "Adding a Cluster Service to the Cluster" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-config-conga-ca |
3.10. Creating a vNIC Profile | 3.10. Creating a vNIC Profile This Ruby example creates a vNIC profile. # Find the root of the tree of services: system_service = connection.system_service # Find the network where you want to add the profile. There may be multiple # networks with the same name (in different data centers, for example). # Therefore, you must look up a specific network by name, in a specific data center. dcs_service = system_service.data_centers_service dc = dcs_service.list(search: 'name=mydc').first networks = connection.follow_link(dc.networks) network = networks.detect { |n| n.name == 'mynetwork' } # Create the vNIC profile, with passthrough and port mirroring disabled: profiles_service = system_service.vnic_profiles_service profiles_service.add( OvirtSDK4::VnicProfile.new( name: 'myprofile', pass_through: { mode: OvirtSDK4::VnicPassThroughMode::DISABLED, }, port_mirroring: false, network: { id: network.id } ) ) For more information, see VnicProfilesService:add . | [
"Find the root of the tree of services: system_service = connection.system_service Find the network where you want to add the profile. There may be multiple networks with the same name (in different data centers, for example). Therefore, you must look up a specific network by name, in a specific data center. dcs_service = system_service.data_centers_service dc = dcs_service.list(search: 'name=mydc').first networks = connection.follow_link(dc.networks) network = networks.detect { |n| n.name == 'mynetwork' } Create the vNIC profile, with passthrough and port mirroring disabled: profiles_service = system_service.vnic_profiles_service profiles_service.add( OvirtSDK4::VnicProfile.new( name: 'myprofile', pass_through: { mode: OvirtSDK4::VnicPassThroughMode::DISABLED, }, port_mirroring: false, network: { id: network.id } ) )"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/creating_a_vnic_profile |
Chapter 2. Configuring an IBM Cloud account | Chapter 2. Configuring an IBM Cloud account Before you can install OpenShift Container Platform, you must configure an IBM Cloud account. 2.1. Prerequisites You have an IBM Cloud account with a subscription. You cannot install OpenShift Container Platform on a free or trial IBM Cloud account. 2.2. Quotas and limits on IBM Cloud VPC The OpenShift Container Platform cluster uses a number of IBM Cloud VPC components, and the default quotas and limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain regions, or run multiple clusters from your account, you might need to request additional resources for your IBM Cloud account. For a comprehensive list of the default IBM Cloud VPC quotas and service limits, see IBM Cloud's documentation for Quotas and service limits . Virtual Private Cloud (VPC) Each OpenShift Container Platform cluster creates its own VPC. The default quota of VPCs per region is 10 and will allow 10 clusters. To have more than 10 clusters in a single region, you must increase this quota. Application load balancer By default, each cluster creates three application load balancers (ALBs): Internal load balancer for the master API server External load balancer for the master API server Load balancer for the router You can create additional LoadBalancer service objects to create additional ALBs. The default quota of VPC ALBs are 50 per region. To have more than 50 ALBs, you must increase this quota. VPC ALBs are supported. Classic ALBs are not supported for IBM Cloud VPC. Floating IP address By default, the installation program distributes control plane and compute machines across all availability zones within a region to provision the cluster in a highly available configuration. In each availability zone, a public gateway is created and requires a separate floating IP address. The default quota for a floating IP address is 20 addresses per availability zone. The default cluster configuration yields three floating IP addresses: Two floating IP addresses in the us-east-1 primary zone. The IP address associated with the bootstrap node is removed after installation. One floating IP address in the us-east-2 secondary zone. One floating IP address in the us-east-3 secondary zone. IBM Cloud VPC can support up to 19 clusters per region in an account. If you plan to have more than 19 default clusters, you must increase this quota. Virtual Server Instances (VSI) By default, a cluster creates VSIs using bx2-4x16 profiles, which includes the following resources by default: 4 vCPUs 16 GB RAM The following nodes are created: One bx2-4x16 bootstrap machine, which is removed after the installation is complete Three bx2-4x16 control plane nodes Three bx2-4x16 compute nodes For more information, see IBM Cloud's documentation on supported profiles . Table 2.1. VSI component quotas and limits VSI component Default IBM Cloud VPC quota Default cluster configuration Maximum number of clusters vCPU 200 vCPUs per region 28 vCPUs, or 24 vCPUs after bootstrap removal 8 per region RAM 1600 GB per region 112 GB, or 96 GB after bootstrap removal 16 per region Storage 18 TB per region 1050 GB, or 900 GB after bootstrap removal 19 per region If you plan to exceed the resources stated in the table, you must increase your IBM Cloud account quota. Block Storage Volumes For each VPC machine, a block storage device is attached for its boot volume. 
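If you want to check your account against the quotas discussed above before installing, the IBM Cloud CLI can help. The following commands are a sketch and not part of the original documentation; the second one assumes the vpc-infrastructure plugin is installed and the target region is already set:

# List the quota definitions that apply to the account
$ ibmcloud resource quotas

# Confirm that the default bx2-4x16 profile is available in your target region
$ ibmcloud is instance-profiles | grep bx2-4x16

The block storage volumes that these machines require are described next.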
The default cluster configuration creates seven VPC machines, resulting in seven block storage volumes. Additional Kubernetes persistent volume claims (PVCs) of the IBM Cloud VPC storage class create additional block storage volumes. The default quota of VPC block storage volumes are 300 per region. To have more than 300 volumes, you must increase this quota. 2.3. Configuring DNS resolution How you configure DNS resolution depends on the type of OpenShift Container Platform cluster you are installing: If you are installing a public cluster, you use IBM Cloud Internet Services (CIS). If you are installing a private cluster, you use IBM Cloud DNS Services (DNS Services) 2.3.1. Using IBM Cloud Internet Services for DNS resolution The installation program uses IBM Cloud Internet Services (CIS) to configure cluster DNS resolution and provide name lookup for a public cluster. Note This offering does not support IPv6, so dual stack or IPv6 environments are not possible. You must create a domain zone in CIS in the same account as your cluster. You must also ensure the zone is authoritative for the domain. You can do this using a root domain or subdomain. Prerequisites You have installed the IBM Cloud CLI . You have an existing domain and registrar. For more information, see the IBM documentation . Procedure Create a CIS instance to use with your cluster: Install the CIS plugin: USD ibmcloud plugin install cis Create the CIS instance: USD ibmcloud cis instance-create <instance_name> standard 1 1 At a minimum, a Standard plan is required for CIS to manage the cluster subdomain and its DNS records. Connect an existing domain to your CIS instance: Set the context instance for CIS: USD ibmcloud cis instance-set <instance_name> 1 1 The instance cloud resource name. Add the domain for CIS: USD ibmcloud cis domain-add <domain_name> 1 1 The fully qualified domain name. You can use either the root domain or subdomain value as the domain name, depending on which you plan to configure. Note A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Open the CIS web console , navigate to the Overview page, and note your CIS name servers. These name servers will be used in the step. Configure the name servers for your domains or subdomains at the domain's registrar or DNS provider. For more information, see the IBM Cloud documentation . 2.3.2. Using IBM Cloud DNS Services for DNS resolution The installation program uses IBM Cloud DNS Services to configure cluster DNS resolution and provide name lookup for a private cluster. You configure DNS resolution by creating a DNS services instance for the cluster, and then adding a DNS zone to the DNS Services instance. Ensure that the zone is authoritative for the domain. You can do this using a root domain or subdomain. Note IBM Cloud VPC does not support IPv6, so dual stack or IPv6 environments are not possible. Prerequisites You have installed the IBM Cloud CLI . You have an existing domain and registrar. For more information, see the IBM documentation . Procedure Create a DNS Services instance to use with your cluster: Install the DNS Services plugin by running the following command: USD ibmcloud plugin install cloud-dns-services Create the DNS Services instance by running the following command: USD ibmcloud dns instance-create <instance-name> standard-dns 1 1 At a minimum, a Standard plan is required for DNS Services to manage the cluster subdomain and its DNS records. 
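The original procedure continues with zone creation in the next step. As an optional check that is not part of the original procedure, you can first confirm that the DNS Services instance was created, for example with the account-level resource listing:

# List service instances in the account; the new DNS Services instance should appear
$ ibmcloud resource service-instances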
Create a DNS zone for the DNS Services instance: Set the target operating DNS Services instance by running the following command: USD ibmcloud dns instance-target <instance-name> Add the DNS zone to the DNS Services instance by running the following command: USD ibmcloud dns zone-create <zone-name> 1 1 The fully qualified zone name. You can use either the root domain or subdomain value as the zone name, depending on which you plan to configure. A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Record the name of the DNS zone you have created. As part of the installation process, you must update the install-config.yaml file before deploying the cluster. Use the name of the DNS zone as the value for the baseDomain parameter. Note You do not have to manage permitted networks or configure an "A" DNS resource record. As required, the installation program configures these resources automatically. 2.4. IBM Cloud VPC IAM Policies and API Key To install OpenShift Container Platform into your IBM Cloud account, the installation program requires an IAM API key, which provides authentication and authorization to access IBM Cloud service APIs. You can use an existing IAM API key that contains the required policies or create a new one. For an IBM Cloud IAM overview, see the IBM Cloud documentation . 2.4.1. Required access policies You must assign the required access policies to your IBM Cloud account. Table 2.2. Required access policies Service type Service Access policy scope Platform access Service access Account management IAM Identity Service All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Service ID creator Account management [2] Identity and Access Management All resources Editor, Operator, Viewer, Administrator Account management Resource group only All resource groups in the account Administrator IAM services Cloud Object Storage All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager, Content Reader, Object Reader, Object Writer IAM services Internet Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager IAM services DNS Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager IAM services VPC Infrastructure Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager The policy access scope should be set based on how granular you want to assign access. The scope can be set to All resources or Resources based on selected attributes . Optional: This access policy is only required if you want the installation program to create a resource group. For more information about resource groups, see the IBM documentation . 2.4.2. Access policy assignment In IBM Cloud VPC IAM, access policies can be attached to different subjects: Access group (Recommended) Service ID User The recommended method is to define IAM access policies in an access group . This helps organize all the access required for OpenShift Container Platform and enables you to onboard users and service IDs to this group. You can also assign access to users and service IDs directly, if desired. 2.4.3. Creating an API key You must create a user API key or a service ID API key for your IBM Cloud account. Prerequisites You have assigned the required access policies to your IBM Cloud account. 
You have attached your IAM access policies to an access group, or other appropriate resource. Procedure Create an API key, depending on how you defined your IAM access policies. For example, if you assigned your access policies to a user, you must create a user API key . If you assigned your access policies to a service ID, you must create a service ID API key . If your access policies are assigned to an access group, you can use either API key type. For more information on IBM Cloud VPC API keys, see Understanding API keys . 2.5. Supported IBM Cloud VPC regions You can deploy an OpenShift Container Platform cluster to the following regions: au-syd (Sydney, Australia) br-sao (Sao Paulo, Brazil) ca-tor (Toronto, Canada) eu-de (Frankfurt, Germany) eu-gb (London, United Kingdom) jp-osa (Osaka, Japan) jp-tok (Tokyo, Japan) us-east (Washington DC, United States) us-south (Dallas, United States) 2.6. Next steps Configuring IAM for IBM Cloud VPC | [
"ibmcloud plugin install cis",
"ibmcloud cis instance-create <instance_name> standard 1",
"ibmcloud cis instance-set <instance_name> 1",
"ibmcloud cis domain-add <domain_name> 1",
"ibmcloud plugin install cloud-dns-services",
"ibmcloud dns instance-create <instance-name> standard-dns 1",
"ibmcloud dns instance-target <instance-name>",
"ibmcloud dns zone-create <zone-name> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_cloud_vpc/installing-ibm-cloud-account |
Chapter 4. Ceph Object Gateway and the Swift API | Chapter 4. Ceph Object Gateway and the Swift API As a developer, you can use a RESTful application programming interface (API) that is compatible with the Swift API data access model. You can manage the buckets and objects stored in Red Hat Ceph Storage cluster through the Ceph Object Gateway. The following table describes the support status for current Swift functional features: Table 4.1. Features Feature Status Remarks Authentication Supported Get Account Metadata Supported No custom metadata Swift ACLs Supported Supports a subset of Swift ACLs List Containers Supported List Container's Objects Supported Create Container Supported Delete Container Supported Get Container Metadata Supported Add/Update Container Metadata Supported Delete Container Metadata Supported Get Object Supported Create/Update an Object Supported Create Large Object Supported Delete Object Supported Copy Object Supported Get Object Metadata Supported Add/Update Object Metadata Supported Temp URL Operations Supported CORS Not Supported Expiring Objects Supported Object Versioning Not Supported Static Website Not Supported Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 4.1. Swift API limitations Important The following limitations should be used with caution. There are implications related to your hardware selections, so you should always discuss these requirements with your Red Hat account team. Maximum object size when using Swift API: 5GB Maximum metadata size when using Swift API: There is no defined limit on the total size of user metadata that can be applied to an object, but a single HTTP request is limited to 16,000 bytes. 4.2. Create a Swift user To test the Swift interface, create a Swift subuser. Creating a Swift user is a two-step process. The first step is to create the user. The second step is to create the secret key. Note In a multi-site deployment, always create a user on a host in the master zone of the master zone group. Prerequisites Installation of the Ceph Object Gateway. Root-level access to the Ceph Object Gateway node. Procedure Create the Swift user: Syntax Replace NAME with the Swift user name, for example: Example Create the secret key: Syntax Replace NAME with the Swift user name, for example: Example 4.3. Swift authenticating a user To authenticate a user, make a request containing an X-Auth-User and a X-Auth-Key in the header. Syntax Example Response Note You can retrieve data about Ceph's Swift-compatible service by executing GET requests using the X-Storage-Url value during authentication. Additional Resources See the Red Hat Ceph Storage Developer Guide for Swift request headers. See the Red Hat Ceph Storage Developer Guide for Swift response headers. 4.4. Swift container operations As a developer, you can perform container operations with the Swift application programming interface (API) through the Ceph Object Gateway. You can list, create, update, and delete containers. You can also add or update the container's metadata. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 4.4.1. Swift container operations A container is a mechanism for storing data objects. An account can have many containers, but container names must be unique. This API enables a client to create a container, set access controls and metadata, retrieve a container's contents, and delete a container. 
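The Syntax and Example blocks for the user-creation and authentication steps above are not reproduced in this extract. As a hedged sketch of what they typically look like, with the user name, gateway host, port, and keys as placeholders you must adapt to your deployment:

# Step 1: create the Swift subuser for an existing Ceph Object Gateway user
$ radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full

# Step 2: generate a secret key for the subuser
$ radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret

# Authenticate; the response headers include X-Storage-Url and X-Auth-Token,
# which the container and object requests in the rest of this chapter use
$ curl -i -H "X-Auth-User: testuser:swift" -H "X-Auth-Key: <swift_secret_key>" \
    http://rgw.example.com:8080/auth

The container operations described next assume such an authenticated token.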
Since this API makes requests related to information in a particular user's account, all requests in this API must be authenticated unless a container's access control is deliberately made publicly accessible, that is, allows anonymous requests. Note The Amazon S3 API uses the term 'bucket' to describe a data container. When you hear someone refer to a 'bucket' within the Swift API, the term 'bucket' might be construed as the equivalent of the term 'container.' One facet of object storage is that it does not support hierarchical paths or directories. Instead, it supports one level consisting of one or more containers, where each container might have objects. The RADOS Gateway's Swift-compatible API supports the notion of 'pseudo-hierarchical containers', which is a means of using object naming to emulate a container, or directory hierarchy without actually implementing one in the storage system. You can name objects with pseudo-hierarchical names, for example, photos/buildings/empire-state.jpg, but container names cannot contain a forward slash ( / ) character. Important When uploading large objects to versioned Swift containers, use the --leave-segments option with the python-swiftclient utility. Not using --leave-segments overwrites the manifest file. Consequently, an existing object is overwritten, which leads to data loss. 4.4.2. Swift update a container's Access Control List (ACL) When a user creates a container, the user has read and write access to the container by default. To allow other users to read a container's contents or write to a container, you must specifically enable the user. You can also specify * in the X-Container-Read or X-Container-Write settings, which effectively enables all users to either read from or write to the container. Setting * makes the container public. That is it enables anonymous users to either read from or write to the container. Syntax Request Headers X-Container-Read Description The user IDs with read permissions for the container. Type Comma-separated string values of user IDs. Required No X-Container-Write Description The user IDs with write permissions for the container. Type Comma-separated string values of user IDs. Required No 4.4.3. Swift list containers A GET request that specifies the API version and the account will return a list of containers for a particular user account. Since the request returns a particular user's containers, the request requires an authentication token. The request cannot be made anonymously. Syntax Request Parameters limit Description Limits the number of results to the specified value. Type Integer Valid Values N/A Required Yes format Description Limits the number of results to the specified value. Type Integer Valid Values json or xml Required No marker Description Returns a list of results greater than the marker value. Type String Valid Values N/A Required No The response contains a list of containers, or returns with an HTTP 204 response code. Response Entities account Description A list for account information. Type Container container Description The list of containers. Type Container name Description The name of a container. Type String bytes Description The size of the container. Type Integer 4.4.4. Swift list a container's objects To list the objects within a container, make a GET request with the API version, account, and the name of the container. 
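As a hedged illustration of the container requests described in this chapter, the following curl sketch uses a placeholder storage URL of the form returned during authentication (the /swift/v1 prefix and port depend on your gateway configuration), a placeholder token, and a placeholder user ID:

# Create a container
$ curl -i -X PUT -H "X-Auth-Token: <token>" "http://rgw.example.com:8080/swift/v1/my-container"

# List the containers for the account, in JSON format
$ curl -H "X-Auth-Token: <token>" "http://rgw.example.com:8080/swift/v1?format=json"

# List the objects in a container
$ curl -H "X-Auth-Token: <token>" "http://rgw.example.com:8080/swift/v1/my-container?format=json"

# Grant another user read access to the container
$ curl -i -X POST -H "X-Auth-Token: <token>" -H "X-Container-Read: <other_user_id>" \
    "http://rgw.example.com:8080/swift/v1/my-container"

The query parameters that can refine the object listing are described next.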
You can specify query parameters to filter the full list, or leave out the parameters to return a list of the first 10,000 object names stored in the container. Syntax Request Parameters format Description Limits the number of results to the specified value. Type Integer Valid Values json or xml Required No prefix Description Limits the result set to objects beginning with the specified prefix. Type String Valid Values N/A Required No marker Description Returns a list of results greater than the marker value. Type String Valid Values N/A Required No limit Description Limits the number of results to the specified value. Type Integer Valid Values 0 - 10,000 Required No delimiter Description The delimiter between the prefix and the rest of the object name. Type String Valid Values N/A Required No path Description The pseudo-hierarchical path of the objects. Type String Valid Values N/A Required No Response Entities container Description The container. Type Container object Description An object within the container. Type Container name Description The name of an object within the container. Type String hash Description A hash code of the object's contents. Type String last_modified Description The last time the object's contents were modified. Type Date content_type Description The type of content within the object. Type String 4.4.5. Swift create a container To create a new container, make a PUT request with the API version, account, and the name of the new container. The container name must be unique, must not contain a forward-slash (/) character, and should be less than 256 bytes. You can include access control headers and metadata headers in the request. You can also include a storage policy identifying a key for a set of placement pools. For example, execute radosgw-admin zone get to see a list of available keys under placement_pools . A storage policy enables you to specify a special set of pools for the container, for example, SSD-based storage. The operation is idempotent. If you make a request to create a container that already exists, it will return with a HTTP 202 return code, but will not create another container. Syntax Headers X-Container-Read Description The user IDs with read permissions for the container. Type Comma-separated string values of user IDs. Required No X-Container-Write Description The user IDs with write permissions for the container. Type Comma-separated string values of user IDs. Required No X-Container-Meta- KEY Description A user-defined metadata key that takes an arbitrary string value. Type String Required No X-Storage-Policy Description The key that identifies the storage policy under placement_pools for the Ceph Object Gateway. Execute radosgw-admin zone get for available keys. Type String Required No If a container with the same name already exists, and the user is the container owner then the operation will succeed. Otherwise, the operation will fail. HTTP Response 409 Status Code BucketAlreadyExists Description The container already exists under a different user's ownership. 4.4.6. Swift delete a container To delete a container, make a DELETE request with the API version, account, and the name of the container. The container must be empty. If you'd like to check if the container is empty, execute a HEAD request against the container. Once you've successfully removed the container, you'll be able to reuse the container name. Syntax HTTP Response 204 Status Code NoContent Description The container was removed. 4.4.7. 
Swift add or update the container metadata To add metadata to a container, make a POST request with the API version, account, and container name. You must have write permissions on the container to add or update metadata. Syntax Request Headers X-Container-Meta- KEY Description A user-defined metadata key that takes an arbitrary string value. Type String Required No 4.5. Swift object operations As a developer, you can perform object operations with the Swift application programming interface (API) through the Ceph Object Gateway. You can list, create, update, and delete objects. You can also add or update the object's metadata. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 4.5.1. Swift object operations An object is a container for storing data and metadata. A container might have many objects, but the object names must be unique. This API enables a client to create an object, set access controls and metadata, retrieve an object's data and metadata, and delete an object. Since this API makes requests related to information in a particular user's account, all requests in this API must be authenticated. Unless the container or object's access control is deliberately made publicly accessible, that is, allows anonymous requests. 4.5.2. Swift get an object To retrieve an object, make a GET request with the API version, account, container, and object name. You must have read permissions on the container to retrieve an object within it. Syntax Request Headers range Description To retrieve a subset of an object's contents, you can specify a byte range. Type Date Required No If-Modified-Since Description Only copies if modified since the date and time of the source object's last_modified attribute. Type Date Required No If-Unmodified-Since Description Only copies if not modified since the date and time of the source object's last_modified attribute. Type Date Required No Copy-If-Match Description Copies only if the ETag in the request matches the source object's ETag. Type ETag Required No Copy-If-None-Match Description Copies only if the ETag in the request does not match the source object's ETag. Type ETag Required No Response Headers Content-Range Description The range of the subset of object contents. Returned only if the range header field was specified in the request. 4.5.3. Swift create or update an object To create a new object, make a PUT request with the API version, account, container name, and the name of the new object. You must have write permission on the container to create or update an object. The object name must be unique within the container. The PUT request is not idempotent, so if you do not use a unique name, the request will update the object. However, you can use pseudo-hierarchical syntax in the object name to distinguish it from another object of the same name if it is under a different pseudo-hierarchical directory. You can include access control headers and metadata headers in the request. Syntax Request Headers ETag Description An MD5 hash of the object's contents. Recommended. Type String Valid Values N/A Required No Content-Type Description An MD5 hash of the object's contents. Type String Valid Values N/A Required No Transfer-Encoding Description Indicates whether the object is part of a larger aggregate object. Type String Valid Values chunked Required No 4.5.4. Swift delete an object To delete an object, make a DELETE request with the API version, account, container, and object name. 
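A hedged curl sketch of the object requests in these sections follows; the storage URL, container name, object name, and token are placeholders, and the /swift/v1 prefix depends on your gateway configuration:

# Create or update an object by uploading a local file (curl -T issues a PUT)
$ curl -i -T ./hello.txt -H "X-Auth-Token: <token>" \
    "http://rgw.example.com:8080/swift/v1/my-container/hello.txt"

# Retrieve the object
$ curl -H "X-Auth-Token: <token>" \
    "http://rgw.example.com:8080/swift/v1/my-container/hello.txt" -o hello.txt

# Delete the object
$ curl -i -X DELETE -H "X-Auth-Token: <token>" \
    "http://rgw.example.com:8080/swift/v1/my-container/hello.txt"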
You must have write permissions on the container to delete an object within it. Once you've successfully deleted the object, you will be able to reuse the object name. Syntax 4.5.5. Swift copy an object Copying an object allows you to make a server-side copy of an object, so that you do not have to download it and upload it under another container. To copy the contents of one object to another object, you can make either a PUT request or a COPY request with the API version, account, and the container name. For a PUT request, use the destination container and object name in the request, and the source container and object in the request header. For a COPY request, use the source container and object in the request, and the destination container and object in the request header. You must have write permission on the container to copy an object. The destination object name must be unique within the container. The request is not idempotent, so if you do not use a unique name, the request will update the destination object. You can use pseudo-hierarchical syntax in the object name to distinguish the destination object from the source object of the same name if it is under a different pseudo-hierarchical directory. You can include access control headers and metadata headers in the request. Syntax or alternatively: Syntax Request Headers X-Copy-From Description Used with a PUT request to define the source container/object path. Type String Required Yes, if using PUT . Destination Description Used with a COPY request to define the destination container/object path. Type String Required Yes, if using COPY . If-Modified-Since Description Only copies if modified since the date and time of the source object's last_modified attribute. Type Date Required No If-Unmodified-Since Description Only copies if not modified since the date and time of the source object's last_modified attribute. Type Date Required No Copy-If-Match Description Copies only if the ETag in the request matches the source object's ETag. Type ETag Required No Copy-If-None-Match Description Copies only if the ETag in the request does not match the source object's ETag. Type ETag Required No 4.5.6. Swift get object metadata To retrieve an object's metadata, make a HEAD request with the API version, account, container, and object name. You must have read permissions on the container to retrieve metadata from an object within the container. This request returns the same header information as the request for the object itself, but it does not return the object's data. Syntax 4.5.7. Swift add or update object metadata To add metadata to an object, make a POST request with the API version, account, container, and object name. You must have write permissions on the parent container to add or update metadata. Syntax Request Headers X-Object-Meta- KEY Description A user-defined metadata key that takes an arbitrary string value. Type String Required No 4.6. Swift temporary URL operations To allow temporary access, the Swift endpoint of radosgw supports temporary URL (temp url) functionality, for example, to allow GET requests to objects without the need to share credentials. For this functionality, initially the value of X-Account-Meta-Temp-URL-Key and optionally X-Account-Meta-Temp-URL-Key-2 should be set. The Temp URL functionality relies on an HMAC-SHA1 signature against these secret keys. 4.7.
Swift get temporary URL objects Temporary URL uses a cryptographic HMAC-SHA1 signature, which includes the following elements: The value of the Request method, "GET" for instance The expiry time, in the format of seconds since the epoch, that is, Unix time The request path starting from "v1" onwards The above items are normalized with newlines appended between them, and an HMAC is generated using the SHA-1 hashing algorithm against one of the Temp URL Keys posted earlier. A sample Python script that demonstrates this is given below: Example Example Output 4.8. Swift POST temporary URL keys A POST request to the Swift account with the required key sets the secret temp URL key for the account, against which temporary URL access can be provided. Up to two keys are supported, and signatures are checked against both the keys, if present, so that keys can be rotated without invalidating the temporary URLs. Syntax Request Headers X-Account-Meta-Temp-URL-Key Description A user-defined key that takes an arbitrary string value. Type String Required Yes X-Account-Meta-Temp-URL-Key-2 Description A user-defined key that takes an arbitrary string value. Type String Required No 4.9. Swift multi-tenancy container operations When a client application accesses containers, it always operates with credentials of a particular user. In a Red Hat Ceph Storage cluster, every user belongs to a tenant. Consequently, every container operation has an implicit tenant in its context if no tenant is specified explicitly. Thus multi-tenancy is completely backward compatible with previous releases, as long as the referred containers and referring user belong to the same tenant. Extensions employed to specify an explicit tenant differ according to the protocol and authentication system used. A colon character separates tenant and container, thus a sample URL would be: Example By contrast, in a create_container() method, simply separate the tenant and container in the container method itself: Example A combined Python example that exercises the container and object operations in this chapter end to end follows the command listing below. | [
"radosgw-admin subuser create --uid= NAME --subuser= NAME :swift --access=full",
"radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"13TLtdEW7bCqgttQgPzxFxziu0AgabtOc6vM8DLA\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin key create --subuser= NAME :swift --key-type=swift --gen-secret",
"radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"a4ioT4jEP653CDcdU8p4OuhruwABBRZmyNUbnSSt\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"GET /auth HTTP/1.1 Host: swift.example.com X-Auth-User: johndoe X-Auth-Key: R7UUOLFDI2ZI9PRCQ53K",
"HTTP/1.1 204 No Content Date: Mon, 16 Jul 2012 11:05:33 GMT Server: swift X-Storage-Url: https://swift.example.com X-Storage-Token: UOlCCC8TahFKlWuv9DB09TWHF0nDjpPElha0kAa Content-Length: 0 Content-Type: text/plain; charset=UTF-8",
"POST / API_VERSION / ACCOUNT / TENANT : CONTAINER HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN X-Container-Read: * X-Container-Write: UID1 , UID2 , UID3",
"GET / API_VERSION / ACCOUNT HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"GET / API_VERSION / TENANT : CONTAINER HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"PUT / API_VERSION / ACCOUNT / TENANT : CONTAINER HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN X-Container-Read: COMMA_SEPARATED_UIDS X-Container-Write: COMMA_SEPARATED_UIDS X-Container-Meta- KEY : VALUE X-Storage-Policy: PLACEMENT_POOLS_KEY",
"DELETE / API_VERSION / ACCOUNT / TENANT : CONTAINER HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"POST / API_VERSION / ACCOUNT / TENANT : CONTAINER HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN X-Container-Meta-Color: red X-Container-Meta-Taste: salty",
"GET / API_VERSION / ACCOUNT / TENANT : CONTAINER / OBJECT HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"PUT / API_VERSION / ACCOUNT / TENANT : CONTAINER HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"DELETE / API_VERSION / ACCOUNT / TENANT : CONTAINER / OBJECT HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"PUT / API_VERSION / ACCOUNT / TENANT : CONTAINER HTTP/1.1 X-Copy-From: TENANT : SOURCE_CONTAINER / SOURCE_OBJECT Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"COPY / API_VERSION / ACCOUNT / TENANT : SOURCE_CONTAINER / SOURCE_OBJECT HTTP/1.1 Destination: TENANT : DEST_CONTAINER / DEST_OBJECT",
"HEAD / API_VERSION / ACCOUNT / TENANT : CONTAINER / OBJECT HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"POST / API_VERSION / ACCOUNT / TENANT : CONTAINER / OBJECT HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"import hmac from hashlib import sha1 from time import time method = 'GET' host = 'https://objectstore.example.com' duration_in_seconds = 300 # Duration for which the url is valid expires = int(time() + duration_in_seconds) path = '/v1/your-bucket/your-object' key = 'secret' hmac_body = '%s\\n%s\\n%s' % (method, expires, path) hmac_body = hmac.new(key, hmac_body, sha1).hexdigest() sig = hmac.new(key, hmac_body, sha1).hexdigest() rest_uri = \"{host}{path}?temp_url_sig={sig}&temp_url_expires={expires}\".format( host=host, path=path, sig=sig, expires=expires) print rest_uri",
"https://objectstore.example.com/v1/your-bucket/your-object?temp_url_sig=ff4657876227fc6025f04fcf1e82818266d022c6&temp_url_expires=1423200992",
"POST / API_VERSION / ACCOUNT HTTP/1.1 Host: FULLY_QUALIFIED_DOMAIN_NAME X-Auth-Token: AUTH_TOKEN",
"https://rgw.domain.com/tenant:container",
"create_container(\"tenant:container\")"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/developer_guide/ceph-object-gateway-and-the-swift-api |
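The following Python sketch ties together the container and object operations described in this chapter: it authenticates, creates a container, uploads an object with an MD5 ETag, lists the container with query parameters, fetches a byte range, and performs a server-side copy. It is an illustrative sketch rather than part of the reference itself; the endpoint, user name, and secret key are placeholder assumptions, and it assumes the v1-style authentication flow shown earlier in this chapter.

# Illustrative sketch only. Host, user, and key are placeholders (assumptions).
import hashlib
import requests

AUTH_URL = "https://objectstore.example.com/auth"
USER = "testuser:swift"
KEY = "replace-with-swift-secret-key"

# Authenticate: the gateway returns the storage URL and token in response headers.
auth = requests.get(AUTH_URL, headers={"X-Auth-User": USER, "X-Auth-Key": KEY})
auth.raise_for_status()
storage_url = auth.headers["X-Storage-Url"]
token = {"X-Auth-Token": auth.headers["X-Storage-Token"]}

# Create a container; the operation is idempotent (201 on create, 202 if it already exists).
container = storage_url + "/my-container"
requests.put(container, headers=token).raise_for_status()

# Upload an object, sending an MD5 ETag so the gateway can verify the payload.
body = b"hello swift"
obj = container + "/hello.txt"
requests.put(obj, data=body, headers={**token,
    "ETag": hashlib.md5(body).hexdigest(),
    "Content-Type": "text/plain"}).raise_for_status()

# List the container with query parameters (JSON format, prefix filter, limit).
listing = requests.get(container, headers=token,
                       params={"format": "json", "prefix": "hello", "limit": 100})
print(listing.json())  # name, hash, bytes, content_type, last_modified per object

# Retrieve a byte range of the object.
part = requests.get(obj, headers={**token, "Range": "bytes=0-4"})
print(part.status_code, part.content)  # 206 Partial Content, b"hello"

# Server-side copy: PUT the destination and name the source in X-Copy-From.
copy = container + "/hello-copy.txt"
requests.put(copy, headers={**token,
    "X-Copy-From": "my-container/hello.txt"}).raise_for_status()

With multi-tenancy enabled, the same calls work unchanged as long as the container argument carries the tenant-qualified name (tenant:container) described above.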
Chapter 3. Debug symbols for Red Hat build of OpenJDK 17 | Chapter 3. Debug symbols for Red Hat build of OpenJDK 17 Debug symbols help in investigating a crash in Red Hat build of OpenJDK applications. 3.1. Installing the debug symbols This procedure describes how to install the debug symbols for Red Hat build of OpenJDK. Prerequisites Installed the gdb package on your local system. You can issue the sudo yum install gdb command on your CLI to install this package on your local system. Procedure To install the debug symbols, enter the following command: These commands install java-17-openjdk-debuginfo , java-17-openjdk-headless-debuginfo , and additional packages that provide debug symbols for Red Hat build of OpenJDK 17 binaries. These packages are not self-sufficient and do not contain executable binaries. Note The debuginfo-install command is provided by the yum-utils package. To verify that the debug symbols are installed, enter the following command: 3.2. Checking the installation location of debug symbols This procedure explains how to find the location of debug symbols. Note If the debuginfo package is installed, but you cannot get the installation location of the package, then check if the correct package and java versions are installed. After confirming the versions, check the location of debug symbols again. Prerequisites Installed the gdb package on your local system. You can issue the sudo yum install gdb command on your CLI to install this package on your local system. Installed the debug symbols package. See Installing the debug symbols . Procedure To find the location of debug symbols, use the gdb command with the output of the which java command: Use the following commands to explore the *-debug directory to see all the debug versions of the libraries, which include java , javac , and javah : Note The javac and javah tools are provided by the java-17-openjdk-devel package. You can install the package using the command: USD sudo debuginfo-install java-17-openjdk-devel . 3.3. Checking the configuration of debug symbols You can check and set configurations for debug symbols. Enter the following command to get a list of the installed packages: If some debug information packages have not been installed, enter the following command to install the missing packages: Run the following command if you want to hit a specific breakpoint: The above command completes the following tasks: Handles the SIGSEGV error, as the JVM uses SEGV for stack overflow checks. Sets pending breakpoints to yes . Sets a breakpoint in the JavaCalls::call function. This function starts the application in HotSpot (libjvm.so). 3.4. Configuring the debug symbols in a fatal error log file When a Java application goes down due to a JVM crash, a fatal error log file is generated, for example: hs_error , java_error . These error log files are generated in the current working directory of the application. The crash file contains information from the stack. A short illustrative script for spotting frames that lack symbol names in such a log follows the command listing at the end of this chapter. Procedure You can remove all the debug symbols by using the strip -g command. The following code shows an example of a non-stripped hs_error file: The following code shows an example of a stripped hs_error file: Enter the following command to check that you have the same version of debug symbols and the fatal error log file: Note You can also use the sudo update-alternatives --config 'java' command to complete this check. Use the nm command to ensure that libjvm.so has ELF data and text symbols: Additional resources The crash file hs_error is incomplete without the debug symbols installed.
For more information, see Java application down due to JVM crash . | [
"sudo debuginfo-install java-17-openjdk sudo debuginfo-install java-17-openjdk-headless",
"gdb which java Reading symbols from /usr/bin/java...Reading symbols from /usr/lib/debug/usr/lib/jvm/java-17-openjdk-17.0.2.0.8-4.el8_5/bin/java-17.0.2.0.8-4.el8_5.x86_64.debug...done. (gdb)",
"gdb which java Reading symbols from /usr/bin/java...Reading symbols from /usr/lib/debug/usr/lib/jvm/java-17-openjdk-17.0.2.0.8-4.el8_5/bin/java-17.0.2.0.8-4.el8_5.x86_64.debug...done. (gdb)",
"cd /usr/lib/debug/lib/jvm/java-17-openjdk-17.0.2.0.8-4.el8_5",
"tree OJDK 17 version: └── java-17-openjdk-17.0.2.0.8-4.el8_5 ├── bin │ │ │── java-java-17.0.2.0.8-4.el8_5.x86_64.debug │ ├── javac-java-17.0.2.0.8-4.el8_5.x86_64.debug │ ├── javadoc-java-17.0.2.0.8-4.el8_5.x86_64.debug │ └── lib ├── jexec-java-17.0.2.0.8-4.el8_5.x86_64.debug ├── jli │ └── libjli.so-java-17.0.2.0.8-4.el8_5.x86_64.debug ├── jspawnhelper-java-17.0.2.0.8-4.el8_5.x86_64.debug │",
"sudo yum list installed | grep 'java-17-openjdk-debuginfo'",
"sudo yum debuginfo-install glibc-2.28-151.el8.x86_64 libgcc-8.4.1-1.el8.x86_64 libstdc++-8.4.1-1.el8.x86_64 sssd-client-2.4.0-9.el8.x86_64 zlib-1.2.11-17.el8.x86_64",
"gdb -ex 'handle SIGSEGV noprint nostop pass' -ex 'set breakpoint pending on' -ex 'break JavaCalls::call' -ex 'run' --args java ./HelloWorld",
"Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0xb83d2a] Unsafe_SetLong+0xda j sun.misc.Unsafe.putLong(Ljava/lang/Object;JJ)V+0 j Crash.main([Ljava/lang/String;)V+8 v ~StubRoutines::call_stub V [libjvm.so+0x6c0e65] JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0xc85 V [libjvm.so+0x73cc0d] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .constprop.1]+0x31d V [libjvm.so+0x73fd16] jni_CallStaticVoidMethod+0x186 C [libjli.so+0x48a2] JavaMain+0x472 C [libpthread.so.0+0x9432] start_thread+0xe2",
"Stack: [0x00007ff7e1a44000,0x00007ff7e1b44000], sp=0x00007ff7e1b42850, free space=1018k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0xa7ecab] j sun.misc.Unsafe.putAddress(JJ)V+0 j Crash.crash()V+5 j Crash.main([Ljava/lang/String;)V+0 v ~StubRoutines::call_stub V [libjvm.so+0x67133a] V [libjvm.so+0x682bca] V [libjvm.so+0x6968b6] C [libjli.so+0x3989] C [libpthread.so.0+0x7dd5] start_thread+0xc5",
"java -version",
"nm /usr/lib/debug/usr/lib/jvm/java-17-openjdk-17.0.2.0.8-4.el8_5/lib/server/libjvm.so-17.0.2.0.8-4.el8_5.x86_64.debug"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/installing_and_using_red_hat_build_of_openjdk_17_on_rhel/installing-and-configuring-debug-symbols_openjdk |
Chapter 45. RemoteStorageManager schema reference | Chapter 45. RemoteStorageManager schema reference Used in: TieredStorageCustom Property Property type Description className string The class name for the RemoteStorageManager implementation. classPath string The class path for the RemoteStorageManager implementation. config map The additional configuration map for the RemoteStorageManager implementation. Keys will be automatically prefixed with rsm.config. , and added to Kafka broker configuration. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-remotestoragemanager-reference |
Chapter 25. Random functions Tapset | Chapter 25. Random functions Tapset These functions deal with random number generation. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/random-dot-stp |
Chapter 8. Preparing for users | Chapter 8. Preparing for users After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including taking steps to prepare for users. 8.1. Understanding identity provider configuration The OpenShift Container Platform control plane includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster. 8.1.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 8.1.2. Supported identity providers You can configure the following types of identity providers: Identity provider Description htpasswd Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd . Keystone Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. LDAP Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. Basic authentication Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism. Request header Configure a request-header identity provider to identify users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. GitHub or GitHub Enterprise Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. GitLab Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider. Google Configure a google identity provider using Google's OpenID Connect integration . OpenID Connect Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . After you define an identity provider, you can use RBAC to define and apply permissions . 8.1.3. Identity provider parameters The following parameters are common to all identity providers: Parameter Description name The provider name is prefixed to provider user names to form an identity name. mappingMethod Defines how new identities are mapped to users when they log in. Enter one of the following values: claim The default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity. lookup Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users. generate Provisions a user with the identity's preferred user name. 
If a user with the preferred user name is already mapped to an existing identity, a unique user name is generated. For example, myuser2 . This method should not be used in combination with external processes that require exact matches between OpenShift Container Platform user names and identity provider user names, such as LDAP group sync. add Provisions a user with the identity's preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names. Note When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add . 8.1.4. Sample identity provider CR The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider. Sample identity provider CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3 1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 An existing secret containing a file generated using htpasswd . 8.2. Using RBAC to define and apply permissions Understand and apply role-based access control. 8.2.1. RBAC overview Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project. Cluster administrators can use the cluster roles and bindings to control who has various access levels to the OpenShift Container Platform platform itself and all projects. Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action. Authorization is managed using: Authorization object Description Rules Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods. Roles Collections of rules. You can associate, or bind, users and groups to multiple roles. Bindings Associations between users and/or groups with a role. There are two levels of RBAC roles and bindings that control authorization: RBAC level Description Cluster RBAC Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles. Local RBAC Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles. A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation. This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles. During evaluation, both the cluster role bindings and the local role bindings are used. 
For example: Cluster-wide "allow" rules are checked. Locally-bound "allow" rules are checked. Deny by default. 8.2.1.1. Default cluster roles OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally. Important It is not recommended to manually modify the default cluster roles. Modifications to these system roles can prevent a cluster from functioning properly. Default cluster role Description admin A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota. basic-user A user that can get basic information about projects and users. cluster-admin A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. cluster-status A user that can get basic cluster status information. cluster-reader A user that can get or view most of the objects but cannot modify them. edit A user that can modify most objects in a project but does not have the power to view or modify roles or bindings. self-provisioner A user that can create their own projects. view A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings. Be mindful of the difference between local and cluster bindings. For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin , plus a few additional permissions like the ability to edit rate limits, for that project. This binding can be confusing via the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin . The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below. Warning The get pods/exec , get pods/* , and get * rules grant execution privileges when they are applied to a role. Apply the principle of least privilege and assign only the minimal RBAC rights required for users and agents. For more information, see RBAC rules allow execution privileges . 8.2.1.2. Evaluating authorization OpenShift Container Platform evaluates authorization by using: Identity The user name and list of groups that the user belongs to. Action The action you perform. In most cases, this consists of: Project : The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities. Verb : The action itself: get , list , create , update , delete , deletecollection , or watch . Resource name : The API endpoint that you access. Bindings The full list of bindings, the associations between users or groups with a role. OpenShift Container Platform evaluates authorization by using the following steps: The identity and the project-scoped action is used to find all bindings that apply to the user or their groups. Bindings are used to locate all the roles that apply. Roles are used to find all the rules that apply. 
The action is checked against each rule to find a match. If no matching rule is found, the action is then denied by default. Tip Remember that users and groups can be associated with, or bound to, multiple roles at the same time. Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each is associated with. Important The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin . Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level. 8.2.1.2.1. Cluster role aggregation The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation , where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources. 8.2.2. Projects and namespaces A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces. Namespaces provide a unique scope for: Named resources to avoid basic naming collisions. Delegated management authority to trusted users. The ability to limit community resource consumption. Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users. A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or if allowed to create projects, automatically have access to their own projects. Projects can have a separate name , displayName , and description . The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters. The optional displayName is how the project is displayed in the web console (defaults to name ). The optional description can be a more detailed description of the project and is also visible in the web console. Each project scopes its own set of: Object Description Objects Pods, services, replication controllers, etc. Policies Rules for which users can or cannot perform actions on objects. Constraints Quotas for each kind of object that can be limited. Service accounts Service accounts act automatically with designated access to objects in the project. Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects. Developers and administrators can interact with projects by using the CLI or the web console. 8.2.3. Default projects OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and they have guaranteed admission by the kubelet. Pods created for master components in these namespaces are already marked as critical.
Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. 8.2.4. Viewing cluster roles and bindings You can use the oc CLI to view cluster roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the cluster roles and bindings. Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings. Procedure To view the cluster roles and their associated rule sets: USD oc describe clusterrole.rbac Example output Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create 
delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create 
delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete 
deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*] ... 
To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles: USD oc describe clusterrolebinding.rbac Example output Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api ... 8.2.5. Viewing local roles and bindings You can use the oc CLI to view local roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the local roles and bindings: Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings. Users with the admin default cluster role bound locally can view and manage roles and bindings in that project. Procedure To view the current set of local role bindings, which show the users and groups that are bound to various roles for the current project: USD oc describe rolebinding.rbac To view the local role bindings for a different project, add the -n flag to the command: USD oc describe rolebinding.rbac -n joe-project Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. 
Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project 8.2.6. Adding roles to users You can use the oc adm administrator CLI to manage the roles and bindings. Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands. You can bind any of the default cluster roles to local users or groups in your project. Procedure Add a role to a user in a specific project: USD oc adm policy add-role-to-user <role> <user> -n <project> For example, you can add the admin role to the alice user in joe project by running: USD oc adm policy add-role-to-user admin alice -n joe Tip You can alternatively apply the following YAML to add the role to the user: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice View the local role bindings and verify the addition in the output: USD oc describe rolebinding.rbac -n <project> For example, to view the local role bindings for the joe project: USD oc describe rolebinding.rbac -n joe Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe 1 The alice user has been added to the admins RoleBinding . 8.2.7. Creating a local role You can create a local role for a project and then bind it to a user. Procedure To create a local role for a project, run the following command: USD oc create role <name> --verb=<verb> --resource=<resource> -n <project> In this command, specify: <name> , the local role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to <project> , the project name For example, to create a local role that allows a user to view pods in the blue project, run the following command: USD oc create role podview --verb=get --resource=pod -n blue To bind the new role to a user, run the following command: USD oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue 8.2.8. 
Creating a cluster role You can create a cluster role. Procedure To create a cluster role, run the following command: USD oc create clusterrole <name> --verb=<verb> --resource=<resource> In this command, specify: <name> , the local role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to For example, to create a cluster role that allows a user to view pods, run the following command: USD oc create clusterrole podviewonly --verb=get --resource=pod 8.2.9. Local role binding commands When you manage a user or group's associated roles for local role bindings using the following operations, a project may be specified with the -n flag. If it is not specified, then the current project is used. You can use the following commands for local RBAC management. Table 8.1. Local role binding operations Command Description USD oc adm policy who-can <verb> <resource> Indicates which users can perform an action on a resource. USD oc adm policy add-role-to-user <role> <username> Binds a specified role to specified users in the current project. USD oc adm policy remove-role-from-user <role> <username> Removes a given role from specified users in the current project. USD oc adm policy remove-user <username> Removes specified users and all of their roles in the current project. USD oc adm policy add-role-to-group <role> <groupname> Binds a given role to specified groups in the current project. USD oc adm policy remove-role-from-group <role> <groupname> Removes a given role from specified groups in the current project. USD oc adm policy remove-group <groupname> Removes specified groups and all of their roles in the current project. 8.2.10. Cluster role binding commands You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources. Table 8.2. Cluster role binding operations Command Description USD oc adm policy add-cluster-role-to-user <role> <username> Binds a given role to specified users for all projects in the cluster. USD oc adm policy remove-cluster-role-from-user <role> <username> Removes a given role from specified users for all projects in the cluster. USD oc adm policy add-cluster-role-to-group <role> <groupname> Binds a given role to specified groups for all projects in the cluster. USD oc adm policy remove-cluster-role-from-group <role> <groupname> Removes a given role from specified groups for all projects in the cluster. 8.2.11. Creating a cluster admin The cluster-admin role is required to perform administrator level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources. Prerequisites You must have created a user to define as the cluster admin. Procedure Define the user as a cluster admin: USD oc adm policy add-cluster-role-to-user cluster-admin <user> 8.3. The kubeadmin user OpenShift Container Platform creates a cluster administrator, kubeadmin , after the installation process completes. This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes the password is provided in the installation program's output. For example: INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. 
INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> 8.3.1. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system 8.4. Image configuration Understand and configure image registry settings. 8.4.1. Image controller configuration parameters The image.config.openshift.io/cluster resource holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . Its spec offers the following configuration parameters. Note Parameters such as DisableScheduledImport , MaxImagesBulkImportedPerRepository , MaxScheduledImportsPerMinute , ScheduledImageImportMinimumIntervalSeconds , InternalRegistryHostname are not configurable. Parameter Description allowedRegistriesForImport Limits the container image registries from which normal users can import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. Every element of this list contains a location of the registry specified by the registry domain name. domainName : Specifies a domain name for the registry. If the registry uses a non-standard 80 or 443 port, the port should be included in the domain name as well. insecure : Insecure indicates whether the registry is secure or insecure. By default, if not otherwise specified, the registry is assumed to be secure. additionalTrustedCA A reference to a config map containing additional CAs that should be trusted during image stream import , pod image pull , openshift-image-registry pullthrough , and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM-encoded certificate as the value, for each additional registry CA to trust. externalRegistryHostnames Provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The value must be in hostname[:port] format. registrySources Contains configuration that determines how the container runtime should treat individual registries when accessing images for builds and pods. For instance, whether or not to allow insecure access. It does not contain configuration for the internal cluster registry. insecureRegistries : Registries which do not have a valid TLS certificate or only support HTTP connections. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . 
You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . blockedRegistries : Registries for which image pull and push actions are denied. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are allowed. allowedRegistries : Registries for which image pull and push actions are allowed. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are blocked. containerRuntimeSearchRegistries : Registries for which image pull and push actions are allowed using image short names. All other registries are blocked. Either blockedRegistries or allowedRegistries can be set, but not both. Warning When the allowedRegistries parameter is defined, all registries, including registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. When using the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The status field of the image.config.openshift.io/cluster resource holds observed values from the cluster. Parameter Description internalRegistryHostname Set by the Image Registry Operator, which controls the internalRegistryHostname . It sets the hostname for the default internal image registry. The value must be in hostname[:port] format. For backward compatibility, you can still use the OPENSHIFT_DEFAULT_REGISTRY environment variable, but this setting overrides the environment variable. externalRegistryHostnames Set by the Image Registry Operator, provides the external hostnames for the image registry when it is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The values must be in hostname[:port] format. 8.4.2. Configuring image registry settings You can configure image registry settings by editing the image.config.openshift.io/cluster custom resource (CR). The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster CR for any changes to the registries and reboots the nodes when it detects changes. 
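For example, if you only need to block a handful of registries rather than enumerate every allowed one, a minimal deny-list configuration might look like the following sketch. The registry names here are placeholders, not recommendations, and blockedRegistries cannot be combined with allowedRegistries :

apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    blockedRegistries:
    - untrusted-registry.example.com
    insecureRegistries:
    - insecure.example.com

All registries that are not listed under blockedRegistries remain usable, which is often less disruptive than maintaining a complete allow list.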
Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Image : Holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . 2 allowedRegistriesForImport : Limits the container image registries from which normal users may import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. 3 additionalTrustedCA : A reference to a config map containing additional certificate authorities (CA) that are trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust. 4 registrySources : Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. You can also define whether or not to allow access to insecure registries or to registries that use image short names. This example uses the allowedRegistries parameter, which defines the registries that are allowed to be used. The insecure registry insecure.com is also allowed. The registrySources parameter does not contain configuration for the internal cluster registry. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, you must add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. Do not add the registry.redhat.io and quay.io registries to the blockedRegistries list. When using the allowedRegistries , blockedRegistries , or insecureRegistries parameter, you can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . Insecure external registries should be avoided to reduce possible security risks.
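Because the Machine Config Operator applies the registry changes by updating and rebooting nodes one at a time, you can track the overall rollout before inspecting individual nodes. A minimal check (pool names and output columns may differ slightly in your cluster):

oc get machineconfigpool

The affected pools report UPDATING as True while their nodes reboot and return to UPDATED as True when the rollout is complete.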
To check that the changes are applied, list your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-j5cd0qt-f76d1-vfj5x-master-0 Ready master 98m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-master-1 Ready,SchedulingDisabled master 99m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-master-2 Ready master 98m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4 Ready worker 90m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz NotReady,SchedulingDisabled worker 90m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv Ready worker 90m v1.22.1 For more information on the allowed, blocked, and insecure registry parameters, see Configuring image registry settings . 8.4.2.1. Configuring additional trust stores for image registry access The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access. Prerequisites The certificate authorities (CA) must be PEM-encoded. Procedure You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries. The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the PEM certificate content is the value, for each additional registry CA to trust. Image registry CA config map example apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . You can configure additional CAs with the following procedure. To configure an additional CA: USD oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config USD oc edit image.config.openshift.io cluster spec: additionalTrustedCA: name: registry-config 8.4.2.2. Configuring image registry repository mirroring Setting up container registry repository mirroring enables you to do the following: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. The attributes of repository mirroring in OpenShift Container Platform include: Image pulls are resilient to registry downtimes. Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. 
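Each mirrored source repository is rendered on the nodes as a TOML stanza in /etc/containers/registries.conf ; a simplified sketch of the format, which is shown in full later in this section, looks like this:

[[registry]]
  location = "registry.access.redhat.com/ubi8/ubi-minimal"
  mirror-by-digest-only = true

  [[registry.mirror]]
    location = "example.io/example/ubi-minimal"

The container runtime tries each [[registry.mirror]] entry in order and falls back to the original location only if none of the mirrors can serve the image.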
Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a datacenter that is in a disconnected environment. After OpenShift Container Platform installation: Even if you don't configure mirroring during OpenShift Container Platform installation, you can do so later using the ImageContentSourcePolicy object. The following procedure provides a post-installation mirror configuration, where you create an ImageContentSourcePolicy object that identifies: The source of the container image repository you want to mirror. A separate entry for each mirror repository you want to offer the content requested from the source repository. Note You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy object. You cannot add a pull secret to a project. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source directory to the mirrored repository. For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy \ docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi8/ubi-minimal image from registry.access.redhat.com . After you create the registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Log in to your OpenShift Container Platform cluster. Create an ImageContentSourcePolicy file (for example, registryrepomirror.yaml ), replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8 1 Indicates the name of the image registry and repository. 2 Indicates multiple mirror repositories for each target repository. If one mirror is down, the target repository can use another mirror. 3 Indicates the registry and repository containing the content that is mirrored. 4 You can configure a namespace inside a registry to use any image in that namespace. 
If you use a registry domain as a source, the ImageContentSourcePolicy resource is applied to all repositories from the registry. 5 If you configure the registry name, the ImageContentSourcePolicy resource is applied to all repositories from a source registry to a mirror registry. 6 Pulls the image mirror.example.net/image@sha256:... . 7 Pulls the image myimage in the source registry namespace from the mirror mirror.example.net/myimage@sha256:... . 8 Pulls the image registry.example.com/example/myimage from the mirror registry mirror.example.net/registry-example-com/example/myimage@sha256:... . The ImageContentSourcePolicy resource is applied to all repositories from a source registry to a mirror registry mirror.example.net/registry-example-com . Create the new ImageContentSourcePolicy object: USD oc create -f registryrepomirror.yaml After the ImageContentSourcePolicy object is created, the new settings are deployed to each node and the cluster starts using the mirrored repository for requests to the source repository. To check that the mirrored configuration settings, are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0 The Imagecontentsourcepolicy resource does not restart the nodes. Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Change your root directory to /host : sh-4.2# chroot /host Check the /etc/containers/registries.conf file to make sure the changes were made: sh-4.2# cat /etc/containers/registries.conf Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] short-name-mode = "" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi8/ubi-minimal" mirror-by-digest-only = true [[registry.mirror]] location = "example.io/example/ubi-minimal" [[registry.mirror]] location = "example.com/example/ubi-minimal" [[registry]] prefix = "" location = "registry.example.com" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.net/registry-example-com" [[registry]] prefix = "" location = "registry.example.com/example" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.net" [[registry]] prefix = "" location = "registry.example.com/example/myimage" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.net/image" [[registry]] prefix = "" location = "registry.redhat.io" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.com" [[registry]] prefix = "" location = "registry.redhat.io/openshift4" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.com/redhat" Pull an image digest to the node from the source and check if it is resolved by the mirror. ImageContentSourcePolicy objects support image digests only, not image tags. 
sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. The first working mirror is used to supply the pulled image. The main registry is only used if no other mirror works. From the system context, the Insecure flags are used as fallback. The format of the /etc/containers/registries.conf file has changed recently. It is now version 2 and in TOML format. 8.5. Populating OperatorHub from mirrored Operator catalogs If you mirrored Operator catalogs for use with disconnected clusters, you can populate OperatorHub with the Operators from your mirrored catalogs. You can use the generated manifests from the mirroring process to create the required ImageContentSourcePolicy and CatalogSource objects. 8.5.1. Prerequisites Mirroring Operator catalogs for use with disconnected clusters 8.5.2. Creating the ImageContentSourcePolicy object After mirroring Operator catalog content to your mirror registry, create the required ImageContentSourcePolicy (ICSP) object. The ICSP object configures nodes to translate between the image references stored in Operator manifests and the mirrored registry. Procedure On a host with access to the disconnected cluster, create the ICSP by running the following command to specify the imageContentSourcePolicy.yaml file in your manifests directory: USD oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml where <path/to/manifests/dir> is the path to the manifests directory for your mirrored content. You can now create a CatalogSource object to reference your mirrored index image and Operator content. 8.5.3. Adding a catalog source to a cluster Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface. Prerequisites An index image built and pushed to a registry. Procedure Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point. Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>/<namespace>/redhat-operator-index:v4.9 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m 1 If you mirrored content to local files before uploading to a registry, remove any slash ( / ) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object. 2 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 3 Specify your index image. 4 Specify your name or the name of the organization publishing the catalog. 5 Catalog sources can automatically check for new versions to keep up to date.
Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify the following resources are created successfully. Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. If your index image is hosted on a private registry and requires authentication, see Accessing images for Operators from private registries . If you want your catalogs to be able to automatically update their index image version after cluster upgrades by using Kubernetes version-based image tags, see Image template for custom catalog sources . 8.6. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. As a cluster administrator, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose All namespaces on the cluster (default) to have the Operator installed on all namespaces or choose individual namespaces, if available, to only install the Operator on selected namespaces. This example chooses All namespaces... to make the Operator available to all users and projects. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. 8.6.1. Installing from OperatorHub using the web console You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type jaeger to find the Jaeger Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. 
Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page: Select one of the following: All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available. A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Select an Update Channel (if more than one is available). Select Automatic or Manual approval strategy, as described earlier. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 8.6.2. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Install the oc command to your local system. Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. 
If the Operator you intend to install uses the AllNamespaces , then the openshift-operators namespace already has an appropriate Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one. Note The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode. Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace> Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml : Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate Environment Variables in the container. 8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of Tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Create the Subscription object: USD oc apply -f sub.yaml At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Additional resources About OperatorGroups | [
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3",
"oc describe clusterrole.rbac",
"Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch 
update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] 
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list 
watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]",
"oc describe clusterrolebinding.rbac",
"Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api",
"oc describe rolebinding.rbac",
"oc describe rolebinding.rbac -n joe-project",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project",
"oc adm policy add-role-to-user <role> <user> -n <project>",
"oc adm policy add-role-to-user admin alice -n joe",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice",
"oc describe rolebinding.rbac -n <project>",
"oc describe rolebinding.rbac -n joe",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe",
"oc create role <name> --verb=<verb> --resource=<resource> -n <project>",
"oc create role podview --verb=get --resource=pod -n blue",
"oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue",
"oc create clusterrole <name> --verb=<verb> --resource=<resource>",
"oc create clusterrole podviewonly --verb=get --resource=pod",
"oc adm policy add-cluster-role-to-user cluster-admin <user>",
"INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>",
"oc delete secrets kubeadmin -n kube-system",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-j5cd0qt-f76d1-vfj5x-master-0 Ready master 98m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-master-1 Ready,SchedulingDisabled master 99m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-master-2 Ready master 98m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4 Ready worker 90m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz NotReady,SchedulingDisabled worker 90m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv Ready worker 90m v1.22.1",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi8/ubi-minimal\" mirror-by-digest-only = true [[registry.mirror]] location = \"example.io/example/ubi-minimal\" [[registry.mirror]] location = \"example.com/example/ubi-minimal\" [[registry]] prefix = \"\" location = \"registry.example.com\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/image\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com/redhat\"",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6",
"oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>/<namespace>/redhat-operator-index:v4.9 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"oc apply -f sub.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/post-installation_configuration/post-install-preparing-for-users |
Integrate with Identity Service | Integrate with Identity Service Red Hat OpenStack Platform 16.0 Use Active Directory or Red Hat Identity Management as an external authentication back end OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/integrate_with_identity_service/index |
15.4. Virtual Machine Graphical Console | This window displays a guest's graphical console. Guests can use several different protocols to export their graphical framebuffers: virt-manager supports VNC and SPICE . If your virtual machine is set to require authentication, the Virtual Machine graphical console prompts you for a password before the display appears. Figure 15.7. Graphical console window Note VNC is considered insecure by many security experts; however, several changes have been made to enable the secure use of VNC for virtualization on Red Hat Enterprise Linux. The guest machines only listen to the local host's loopback address ( 127.0.0.1 ). This ensures that only those with shell privileges on the host can access virt-manager and the virtual machine through VNC. Although virt-manager can be configured to listen on other public network interfaces, and alternative methods can be configured, doing so is not recommended. Remote administration can be performed by tunneling over SSH, which encrypts the traffic. Although VNC can be configured for remote access without tunneling over SSH, this is not recommended for security reasons. To remotely administer the guest, follow the instructions in Chapter 5, Remote Management of Guests . TLS can provide enterprise-level security for managing guest and host systems. Your local desktop can intercept key combinations (for example, Ctrl+Alt+F1) to prevent them from being sent to the guest machine. You can use the Send key menu option to send these sequences. From the guest machine window, click the Send key menu and select the key sequence to send. In addition, from this menu you can also capture the screen output. SPICE is an alternative to VNC available for Red Hat Enterprise Linux. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-managing_guests_with_the_virtual_machine_manager_virt_manager-virtual_machine_graphical_console_
Chapter 5. Deprecated functionality | Chapter 5. Deprecated functionality This section provides an overview of functionality that has been deprecated in all minor releases up to this release of Red Hat Ceph Storage. Important Deprecated functionality continues to be supported until the end of life of Red Hat Ceph Storage 5. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. For the most recent list of deprecated functionality within a particular major release, refer to the latest version of release documentation. NFS support for CephFS is now deprecated NFS support for CephFS is now deprecated in favor of upcoming NFS availability in OpenShift Data Foundation. Red Hat Ceph Storage support for NFS in OpenStack Manila is not affected. Deprecated functionality will receive only bug fixes for the lifetime of the current release, and may be removed in future releases. Relevant documentation around this technology is identified as "Limited Availability". iSCSI support is now deprecated iSCSI support is now deprecated in favor of future NVMe-oF support. Deprecated functionality will receive only bug fixes for the lifetime of the current release, and may be removed in future releases. Relevant documentation around this technology is identified as "Limited Availability". Ceph configuration file is now deprecated The Ceph configuration file ( ceph.conf ) is now deprecated in favor of new centralized configuration stored in Ceph Monitors. For details, see The Ceph configuration database section in the Red Hat Ceph Storage Configuration Guide . The min_compat_client parameter for Ceph File System (CephFS) is now deprecated The min_compat_client parameter is deprecated for Red Hat Ceph Storage 5.0 and new client features are added for setting up the Ceph File System (CephFS). For details, see the Client features section in the Red Hat Ceph Storage File System Guide . The snapshot of Ceph File System subvolume group is now deprecated The snapshot feature of Ceph File System (CephFS) subvolume group is deprecated for Red Hat Ceph Storage 5.0. The existing snapshots can be listed and deleted whenever needed. For details, see the Listing snapshots of a file system subvolume group and Removing snapshots of a file system subvolume group sections in the Red Hat Ceph Storage Ceph File System guide . The Cockpit Ceph Installer is now deprecated Installing a Red Hat Ceph Storage 5 cluster using the Cockpit Ceph Installer is not supported. Use Cephadm to install a Red Hat Ceph Storage cluster. For details, see the Red Hat Ceph Storage Installation guide . | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.1_release_notes/deprecated-functionality
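As an illustration of the centralized configuration that replaces ceph.conf, the following commands read and write options in the monitors' configuration database. This is a minimal sketch; the osd_memory_target value shown is an arbitrary example, not a recommendation from the release notes:
# List every option stored centrally in the Ceph Monitors
ceph config dump
# Set an option for all OSDs without editing ceph.conf on each node
ceph config set osd osd_memory_target 4294967296
# Show the value that a specific daemon resolves from the central database
ceph config get osd.0 osd_memory_target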
Chapter 5. RoleBinding [authorization.openshift.io/v1] | Chapter 5. RoleBinding [authorization.openshift.io/v1] Description RoleBinding references a Role, but not contain it. It can reference any Role in the same namespace or in the global namespace. It adds who information via (Users and Groups) OR Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace (excepting the master namespace which has power in all namespaces). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required subjects roleRef 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources groupNames array (string) GroupNames holds all the groups directly bound to the role. This field should only be specified when supporting legacy clients and servers. See Subjects for further details. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata roleRef ObjectReference RoleRef can only reference the current namespace and the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error. Since Policy is a singleton, this is sufficient knowledge to locate a role. subjects array (ObjectReference) Subjects hold object references to authorize with this rule. This field is ignored if UserNames or GroupNames are specified to support legacy clients and servers. Thus newer clients that do not need to support backwards compatibility should send only fully qualified Subjects and should omit the UserNames and GroupNames fields. Clients that need to support backwards compatibility can use this field to build the UserNames and GroupNames. userNames array (string) UserNames holds all the usernames directly bound to the role. This field should only be specified when supporting legacy clients and servers. See Subjects for further details. 5.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/rolebindings GET : list objects of kind RoleBinding /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings GET : list objects of kind RoleBinding POST : create a RoleBinding /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings/{name} DELETE : delete a RoleBinding GET : read the specified RoleBinding PATCH : partially update the specified RoleBinding PUT : replace the specified RoleBinding 5.2.1. /apis/authorization.openshift.io/v1/rolebindings HTTP method GET Description list objects of kind RoleBinding Table 5.1. HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty 5.2.2. /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings HTTP method GET Description list objects of kind RoleBinding Table 5.2. 
HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty HTTP method POST Description create a RoleBinding Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body RoleBinding schema Table 5.5. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 202 - Accepted RoleBinding schema 401 - Unauthorized Empty 5.2.3. /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the RoleBinding HTTP method DELETE Description delete a RoleBinding Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Status_v3 schema 202 - Accepted Status_v3 schema 401 - Unauthorized Empty HTTP method GET Description read the specified RoleBinding Table 5.9. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RoleBinding Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RoleBinding Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body RoleBinding schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/role_apis/rolebinding-authorization-openshift-io-v1 |
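The endpoints documented above can be exercised directly with a bearer token. This is a minimal sketch; the API server URL, the my-project namespace, and the edit-binding name are placeholder assumptions, not values from the API reference:
TOKEN=$(oc whoami -t)
API=https://api.example.com:6443
# List RoleBinding objects in a namespace through the documented endpoint
curl -sk -H "Authorization: Bearer ${TOKEN}" "${API}/apis/authorization.openshift.io/v1/namespaces/my-project/rolebindings"
# Read a single RoleBinding by name
curl -sk -H "Authorization: Bearer ${TOKEN}" "${API}/apis/authorization.openshift.io/v1/namespaces/my-project/rolebindings/edit-binding"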
Chapter 11. Changing the MTU for the cluster network | Chapter 11. Changing the MTU for the cluster network As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change. You can change the MTU only for clusters using the OVN-Kubernetes or OpenShift SDN network plugins. 11.1. About the cluster MTU During installation the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You do not normally need to override the detected MTU. You might want to change the MTU of the cluster network for several reasons: The MTU detected during cluster installation is not correct for your infrastructure Your cluster infrastructure now requires a different MTU, such as from the addition of nodes that need a different MTU for optimal performance You can change the cluster MTU for only the OVN-Kubernetes and OpenShift SDN cluster network plugins. 11.1.1. Service interruption considerations When you initiate an MTU change on your cluster the following effects might impact service availability: At least two rolling reboots are required to complete the migration to a new MTU. During this time, some nodes are not available as they restart. Specific applications deployed to the cluster with shorter timeout intervals than the absolute TCP timeout interval might experience disruption during the MTU change. 11.1.2. MTU value selection When planning your MTU migration there are two related but distinct MTU values to consider. Hardware MTU : This MTU value is set based on the specifics of your network infrastructure. Cluster network MTU : This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your network plugin: OVN-Kubernetes : 100 bytes OpenShift SDN : 50 bytes If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your network plugin from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . Important To avoid selecting an MTU value that is not acceptable by a node, verify the maximum MTU value ( maxmtu ) that is accepted by the network interface by using the ip -d link command. 11.1.3. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 11.1. Live migration of the cluster MTU User-initiated steps OpenShift Container Platform activity Set the following values in the Cluster Network Operator configuration: spec.migration.mtu.machine.to spec.migration.mtu.network.from spec.migration.mtu.network.to Cluster Network Operator (CNO) : Confirms that each field is set to a valid value. The mtu.machine.to must be set to either the new hardware MTU or to the current hardware MTU if the MTU for the hardware is not changing. This value is transient and is used as part of the migration process. Separately, if you specify a hardware MTU that is different from your existing hardware MTU value, you must manually configure the MTU to persist by other means, such as with a machine config, DHCP setting, or a Linux kernel command line. 
The mtu.network.from field must equal the network.status.clusterNetworkMTU field, which is the current MTU of the cluster network. The mtu.network.to field must be set to the target cluster network MTU and must be lower than the hardware MTU to allow for the overlay overhead of the network plugin. For OVN-Kubernetes, the overhead is 100 bytes and for OpenShift SDN the overhead is 50 bytes. If the values provided are valid, the CNO writes out a new temporary configuration with the MTU for the cluster network set to the value of the mtu.network.to field. Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster. Reconfigure the MTU of the primary network interface for the nodes on the cluster. You can use a variety of methods to accomplish this, including: Deploying a new NetworkManager connection profile with the MTU change Changing the MTU through a DHCP server setting Changing the MTU through boot parameters N/A Set the mtu value in the CNO configuration for the network plugin and set spec.migration to null . Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster with the new MTU configuration. 11.2. Changing the cluster MTU As a cluster administrator, you can change the maximum transmission unit (MTU) for your cluster. The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update rolls out. The following procedure describes how to change the cluster MTU by using either machine configs, DHCP, or an ISO. If you use the DHCP or ISO approach, you must refer to configuration artifacts that you kept after installing your cluster to complete the procedure. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You identified the target MTU for your cluster. The correct MTU varies depending on the network plugin that your cluster uses: OVN-Kubernetes : The cluster MTU must be set to 100 less than the lowest hardware MTU value in your cluster. OpenShift SDN : The cluster MTU must be set to 50 less than the lowest hardware MTU value in your cluster. Procedure To increase or decrease the MTU for the cluster network complete the following procedure. To obtain the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Example output ... Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OpenShiftSDN Service Network: 10.217.4.0/23 ... Prepare your configuration for the hardware MTU: If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration: dhcp-option-force=26,<mtu> where: <mtu> Specifies the hardware MTU for the DHCP server to advertise. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for OpenShift Container Platform if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified. 
Find the primary network interface: If you are using the OpenShift SDN network plugin, enter the following command: USD oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }' where: <node_name> Specifies the name of a node in your cluster. If you are using the OVN-Kubernetes network plugin, enter the following command: USD oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 where: <node_name> Specifies the name of a node in your cluster. Create the following NetworkManager configuration in the <interface>-mtu.conf file: Example NetworkManager connection configuration [connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu> where: <mtu> Specifies the new hardware MTU value. <interface> Specifies the primary network interface name. Create two MachineConfig objects, one for the control plane nodes and another for the worker nodes in your cluster: Create the following Butane config in the control-plane-interface.bu file: variant: openshift version: 4.13.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600 1 Specify the NetworkManager connection name for the primary network interface. 2 Specify the local filename for the updated NetworkManager configuration file from the previous step. Create the following Butane config in the worker-interface.bu file: variant: openshift version: 4.13.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600 1 Specify the NetworkManager connection name for the primary network interface. 2 Specify the local filename for the updated NetworkManager configuration file from the previous step. Create MachineConfig objects from the Butane configs by running the following command: USD for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change. USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }' where: <overlay_from> Specifies the current cluster network MTU value. <overlay_to> Specifies the target MTU for the cluster network. This value is set relative to the value for <machine_to> and for OVN-Kubernetes must be 100 less and for OpenShift SDN must be 50 less. <machine_to> Specifies the MTU for the primary network interface on the underlying host network. Example that increases the cluster MTU USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 9000 } , "machine": { "to" : 9100} } } } }' As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false .
Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/mtu-migration.sh Update the underlying network interface MTU value: If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The MachineConfig Operator automatically performs a rolling reboot of the nodes in your cluster. USD for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure. As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep path: where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. 
If the machine config is successfully deployed, the output contains the /etc/NetworkManager/conf.d/99-<interface>-mtu.conf file path and the ExecStart=/usr/local/bin/mtu-migration.sh line. To finalize the MTU migration, enter one of the following commands: If you are using the OVN-Kubernetes network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}' where: <mtu> Specifies the new cluster network MTU that you specified with <overlay_to> . If you are using the OpenShift SDN network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "openshiftSDNConfig": { "mtu": <mtu> }}}}' where: <mtu> Specifies the new cluster network MTU that you specified with <overlay_to> . After finalizing the MTU migration, each MCP node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Verification You can verify that a node in your cluster uses an MTU that you specified in the procedure. To get the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Get the current MTU for the primary network interface of a node. To list the nodes in your cluster, enter the following command: USD oc get nodes To obtain the current MTU setting for the primary network interface on a node, enter the following command: USD oc debug node/<node> -- chroot /host ip address show <interface> where: <node> Specifies a node from the output from the step. <interface> Specifies the primary network interface name for the node. Example output ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051 11.3. Additional resources Using advanced networking options for PXE and ISO installations Manually creating NetworkManager profiles in key file format Configuring a dynamic Ethernet connection using nmcli | [
"oc describe network.config cluster",
"Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OpenShiftSDN Service Network: 10.217.4.0/23",
"dhcp-option-force=26,<mtu>",
"oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }'",
"oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0",
"[connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu>",
"variant: openshift version: 4.13.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"variant: openshift version: 4.13.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 9000 } , \"machine\": { \"to\" : 9100} } } } }'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/mtu-migration.sh",
"for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep path:",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"openshiftSDNConfig\": { \"mtu\": <mtu> }}}}'",
"oc get mcp",
"oc describe network.config cluster",
"oc get nodes",
"oc debug node/<node> -- chroot /host ip address show <interface>",
"ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/changing-cluster-network-mtu |
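As a quick way to apply the MTU selection guidance above, the maximum MTU that each node's NIC accepts can be read before choosing the new values. This is a minimal sketch, not part of the documented procedure; ens3 is the interface name taken from the example output above and may differ in your cluster:
# Print the maxmtu reported by the driver on every node
for node in $(oc get nodes -o name); do
  echo "== ${node}"
  oc debug "${node}" -- chroot /host ip -d link show ens3 2>/dev/null | grep -o 'maxmtu [0-9]*'
done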
Chapter 2. Creating embedded caches | Chapter 2. Creating embedded caches Data Grid provides an EmbeddedCacheManager API that lets you control both the Cache Manager and embedded cache lifecycles programmatically. 2.1. Adding Data Grid to your project Add Data Grid to your project to create embedded caches in your applications. Prerequisites Configure your project to get Data Grid artifacts from the Maven repository. Procedure Add the infinispan-core artifact as a dependency in your pom.xml as follows: <dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-core</artifactId> </dependency> </dependencies> 2.2. Creating and using embedded caches Data Grid provides a GlobalConfigurationBuilder API that controls the Cache Manager and a ConfigurationBuilder API that configures caches. Prerequisites Add the infinispan-core artifact as a dependency in your pom.xml . Procedure Initialize a CacheManager . Note You must always call the cacheManager.start() method to initialize a CacheManager before you can create caches. Default constructors do this for you but there are overloaded versions of the constructors that do not. Cache Managers are also heavyweight objects and Data Grid recommends instantiating only one instance per JVM. Use the ConfigurationBuilder API to define cache configuration. Obtain caches with getCache() , createCache() , or getOrCreateCache() methods. Data Grid recommends using the getOrCreateCache() method because it either creates a cache on all nodes or returns an existing cache. If necessary use the PERMANENT flag for caches to survive restarts. Stop the CacheManager by calling the cacheManager.stop() method to release JVM resources and gracefully shutdown any caches. // Set up a clustered Cache Manager. GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder(); // Initialize the default Cache Manager. DefaultCacheManager cacheManager = new DefaultCacheManager(global.build()); // Create a distributed cache with synchronous replication. ConfigurationBuilder builder = new ConfigurationBuilder(); builder.clustering().cacheMode(CacheMode.DIST_SYNC); // Obtain a volatile cache. Cache<String, String> cache = cacheManager.administration().withFlags(CacheContainerAdmin.AdminFlag.VOLATILE).getOrCreateCache("myCache", builder.build()); // Stop the Cache Manager. cacheManager.stop(); getCache() method Invoke the getCache(String) method to obtain caches, as follows: Cache<String, String> myCache = manager.getCache("myCache"); The preceding operation creates a cache named myCache , if it does not already exist, and returns it. Using the getCache() method creates the cache only on the node where you invoke the method. In other words, it performs a local operation that must be invoked on each node across the cluster. Typically, applications deployed across multiple nodes obtain caches during initialization to ensure that caches are symmetric and exist on each node. createCache() method Invoke the createCache() method to create caches dynamically across the entire cluster. Cache<String, String> myCache = manager.administration().createCache("myCache", "myTemplate"); The preceding operation also automatically creates caches on any nodes that subsequently join the cluster. Caches that you create with the createCache() method are ephemeral by default. If the entire cluster shuts down, the cache is not automatically created again when it restarts. PERMANENT flag Use the PERMANENT flag to ensure that caches can survive restarts. 
Cache<String, String> myCache = manager.administration().withFlags(AdminFlag.PERMANENT).createCache("myCache", "myTemplate"); For the PERMANENT flag to take effect, you must enable global state and set a configuration storage provider. For more information about configuration storage providers, see GlobalStateConfigurationBuilder#configurationStorage() . Additional resources EmbeddedCacheManager EmbeddedCacheManager Configuration org.infinispan.configuration.global.GlobalConfiguration org.infinispan.configuration.cache.ConfigurationBuilder 2.3. Cache API Data Grid provides a Cache interface that exposes simple methods for adding, retrieving and removing entries, including atomic mechanisms exposed by the JDK's ConcurrentMap interface. Based on the cache mode used, invoking these methods will trigger a number of things to happen, potentially even including replicating an entry to a remote node or looking up an entry from a remote node, or potentially a cache store. For simple usage, using the Cache API should be no different from using the JDK Map API, and hence migrating from simple in-memory caches based on a Map to Data Grid's Cache should be trivial. Performance Concerns of Certain Map Methods Certain methods exposed in Map have certain performance consequences when used with Data Grid, such as size() , values() , keySet() and entrySet() . Specific methods on the keySet , values and entrySet are fine for use please see their Javadoc for further details. Attempting to perform these operations globally would have large performance impact as well as become a scalability bottleneck. As such, these methods should only be used for informational or debugging purposes only. It should be noted that using certain flags with the withFlags() method can mitigate some of these concerns, please check each method's documentation for more details. Mortal and Immortal Data Further to simply storing entries, Data Grid's cache API allows you to attach mortality information to data. For example, simply using put(key, value) would create an immortal entry, i.e., an entry that lives in the cache forever, until it is removed (or evicted from memory to prevent running out of memory). If, however, you put data in the cache using put(key, value, lifespan, timeunit) , this creates a mortal entry, i.e., an entry that has a fixed lifespan and expires after that lifespan. In addition to lifespan , Data Grid also supports maxIdle as an additional metric with which to determine expiration. Any combination of lifespans or maxIdles can be used. putForExternalRead operation Data Grid's Cache class contains a different 'put' operation called putForExternalRead . This operation is particularly useful when Data Grid is used as a temporary cache for data that is persisted elsewhere. Under heavy read scenarios, contention in the cache should not delay the real transactions at hand, since caching should just be an optimization and not something that gets in the way. To achieve this, putForExternalRead() acts as a put call that only operates if the key is not present in the cache, and fails fast and silently if another thread is trying to store the same key at the same time. In this particular scenario, caching data is a way to optimise the system and it's not desirable that a failure in caching affects the on-going transaction, hence why failure is handled differently. 
putForExternalRead() is considered to be a fast operation because regardless of whether it's successful or not, it doesn't wait for any locks, and so returns to the caller promptly. To understand how to use this operation, let's look at a basic example. Imagine a cache of Person instances, each keyed by a PersonId , whose data originates in a separate data store. The following code shows the most common pattern of using putForExternalRead within the context of this example: // Id of the person to look up, provided by the application PersonId id = ...; // Get a reference to the cache where person instances will be stored Cache<PersonId, Person> cache = ...; // First, check whether the cache contains the person instance // associated with the given id Person cachedPerson = cache.get(id); if (cachedPerson == null) { // The person is not cached yet, so query the data store with the id Person person = dataStore.lookup(id); // Cache the person along with the id so that future requests can // retrieve it from memory rather than going to the data store cache.putForExternalRead(id, person); } else { // The person was found in the cache, so return it to the application return cachedPerson; } Note that putForExternalRead should never be used as a mechanism to update the cache with a new Person instance originating from application execution (i.e. from a transaction that modifies a Person's address). When updating cached values, use the standard put operation; otherwise, corrupt data is likely to be cached. 2.3.1. AdvancedCache API In addition to the simple Cache interface, Data Grid offers an AdvancedCache interface, geared towards extension authors. The AdvancedCache offers the ability to access certain internal components and to apply flags to alter the default behavior of certain cache methods. The following code snippet depicts how an AdvancedCache can be obtained: AdvancedCache advancedCache = cache.getAdvancedCache(); 2.3.1.1. Flags Flags are applied to regular cache methods to alter the behavior of certain methods. For a list of all available flags, and their effects, see the Flag enumeration. Flags are applied using AdvancedCache.withFlags() . This builder method can be used to apply any number of flags to a cache invocation, for example: advancedCache.withFlags(Flag.CACHE_MODE_LOCAL, Flag.SKIP_LOCKING) .withFlags(Flag.FORCE_SYNCHRONOUS) .put("hello", "world"); 2.3.2. Asynchronous API In addition to synchronous API methods like Cache.put() , Cache.remove() , etc., Data Grid also has an asynchronous, non-blocking API where you can achieve the same results in a non-blocking fashion. These methods are named in a similar fashion to their blocking counterparts, with "Async" appended. E.g., Cache.putAsync() , Cache.removeAsync() , etc. These asynchronous counterparts return a CompletableFuture that contains the actual result of the operation. For example, in a cache parameterized as Cache<String, String> , Cache.put(String key, String value) returns String while Cache.putAsync(String key, String value) returns CompletableFuture<String> . 2.3.2.1. Why use such an API? Non-blocking APIs are powerful in that they provide all of the guarantees of synchronous communications - with the ability to handle communication failures and exceptions - with the ease of not having to block until a call completes. This allows you to better harness parallelism in your system.
For example: Set<CompletableFuture<?>> futures = new HashSet<>(); futures.add(cache.putAsync(key1, value1)); // does not block futures.add(cache.putAsync(key2, value2)); // does not block futures.add(cache.putAsync(key3, value3)); // does not block // the remote calls for the 3 puts will effectively be executed // in parallel, particularly useful if running in distributed mode // and the 3 keys would typically be pushed to 3 different nodes // in the cluster // check that the puts completed successfully for (CompletableFuture<?> f: futures) f.get(); 2.3.2.2. Which processes actually happen asynchronously? There are 4 things in Data Grid that can be considered to be on the critical path of a typical write operation. These are, in order of cost: network calls marshalling writing to a cache store (optional) locking Using the async methods will take the network calls and marshalling off the critical path. For various technical reasons, writing to a cache store and acquiring locks, however, still happens in the caller's thread. | [
"<dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-core</artifactId> </dependency> </dependencies>",
"// Set up a clustered Cache Manager. GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder(); // Initialize the default Cache Manager. DefaultCacheManager cacheManager = new DefaultCacheManager(global.build()); // Create a distributed cache with synchronous replication. ConfigurationBuilder builder = new ConfigurationBuilder(); builder.clustering().cacheMode(CacheMode.DIST_SYNC); // Obtain a volatile cache. Cache<String, String> cache = cacheManager.administration().withFlags(CacheContainerAdmin.AdminFlag.VOLATILE).getOrCreateCache(\"myCache\", builder.build()); // Stop the Cache Manager. cacheManager.stop();",
"Cache<String, String> myCache = manager.getCache(\"myCache\");",
"Cache<String, String> myCache = manager.administration().createCache(\"myCache\", \"myTemplate\");",
"Cache<String, String> myCache = manager.administration().withFlags(AdminFlag.PERMANENT).createCache(\"myCache\", \"myTemplate\");",
"// Id of the person to look up, provided by the application PersonId id = ...; // Get a reference to the cache where person instances will be stored Cache<PersonId, Person> cache = ...; // First, check whether the cache contains the person instance // associated with with the given id Person cachedPerson = cache.get(id); if (cachedPerson == null) { // The person is not cached yet, so query the data store with the id Person person = dataStore.lookup(id); // Cache the person along with the id so that future requests can // retrieve it from memory rather than going to the data store cache.putForExternalRead(id, person); } else { // The person was found in the cache, so return it to the application return cachedPerson; }",
"AdvancedCache advancedCache = cache.getAdvancedCache();",
"advancedCache.withFlags(Flag.CACHE_MODE_LOCAL, Flag.SKIP_LOCKING) .withFlags(Flag.FORCE_SYNCHRONOUS) .put(\"hello\", \"world\");",
"Set<CompletableFuture<?>> futures = new HashSet<>(); futures.add(cache.putAsync(key1, value1)); // does not block futures.add(cache.putAsync(key2, value2)); // does not block futures.add(cache.putAsync(key3, value3)); // does not block // the remote calls for the 3 puts will effectively be executed // in parallel, particularly useful if running in distributed mode // and the 3 keys would typically be pushed to 3 different nodes // in the cluster // check that the puts completed successfully for (CompletableFuture<?> f: futures) f.get();"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/embedding_data_grid_in_java_applications/creating-embedded-caches |
22.2.2. Command Line Configuration | 22.2.2. Command Line Configuration Samba uses /etc/samba/smb.conf as its configuration file. If you change this configuration file, the changes do not take effect until you restart the Samba daemon with the command service smb restart . To specify the Windows workgroup and a brief description of the Samba server, edit the following lines in your smb.conf file: Replace WORKGROUPNAME with the name of the Windows workgroup to which this machine should belong. The BRIEF COMMENT ABOUT SERVER is optional and is used as the Windows comment about the Samba system. To create a Samba share directory on your Linux system, add the following section to your smb.conf file (after modifying it to reflect your needs and your system): The above example allows the users tfox and carole to read and write to the directory /home/share , on the Samba server, from a Samba client. | [
"workgroup = WORKGROUPNAME server string = BRIEF COMMENT ABOUT SERVER",
"[ sharename ] comment = Insert a comment here path = /home/share/ valid users = tfox carole public = no writable = yes printable = no create mask = 0765"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Configuring_a_Samba_Server-Command_Line_Configuration |
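After editing smb.conf as described above, the change can be validated and activated from the shell. This is a minimal sketch under the assumption that the tfox and carole accounts already exist on the system:
# Check the edited configuration for syntax errors and print the effective settings
testparm -s /etc/samba/smb.conf
# Give the users listed in "valid users" Samba passwords
smbpasswd -a tfox
smbpasswd -a carole
# Restart the Samba daemon so the new share definition takes effect
service smb restart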
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_the_red_hat_build_of_cryostat_operator_to_configure_cryostat/making-open-source-more-inclusive |
19.2. Kernel Configuration Changes | 19.2. Kernel Configuration Changes Hardware Enablement Bluetooth (disabled) WIRELESS (disabled) CPU_IDLE (enabled) GPIO_DWAPB (enabled) I2C (enabled) - Designware, QUP, and XLP9XX sensor support: IIO drivers (disabled) Accel sensors (disabled) light + orientation + interrupt trigger (disabled) Input driver mouse, synaptics, rmi4 LED Intel SS4200 (disabled) Generic IRQ CHIP (enabled) Hibernate (enabled) Clock Source DATA (enabled) OSS_CORE (disabled) all SND drivers (disabled) Networking Driver Support Thunder2 driver (enabled) Amazon (enabled) Altera (disabled) ARC (disabled) Broadcom B44, BCMGENET, BNX2X_VLAN, CNIC (disabled) Hisilicon (enabled) cadence MACB (disabled) Chelsio T3 (disabled) Intel E1000 (disabled) Mellanox (enabled) myri10GE (disabled) Qlogic - qla2xxx, netxen_nic, Qed, Qede (enabled) Qualcomm - qcom_emac (enabled) Broadcom - bcm7xxx (disabled) Infiniband Support CXBG4 (enabled) I40IW (enabled) MLX4 (enabled) MLX5 (enabled) IPOIB (enabled) IPOIB_CM (enabled) IPOIB_DEBUG (enabled) ISERT (enabled) SRP (enabled) SRPT (enabled) Core Kernel Support Schedule Imbalance (enabled) 48 bit VA support (enabled) tick cpu accounting (disabled) Context Tracking (enabled) RCU NOCB (enabled) CGROUP-Hugetlb (enabled) CRIU (enabled) BPF_SYSCALL (disabled) PERF_USE_VMALLOC (disabled) HZ_100/HZ (enabled) NO_HZ_IDLE (disabled) NO_HZ_FULL (enabled) BPF_EVENTS (disabled) LZ4 compression (disabled) BTREE (enabled) CPUMASK_OFFSTACK (disabled) DEBUG_INFO_DWARF4 (enabled) SCHEDSTATS (enabled) Striaght DEVMEM (disabled) Transparent Hugepage (HTP) (enabled) ZSMaLLOC_STAT, IDLE_PAGE_TRACKING(enabled) PAGE_EXTENSION and PAGE_POISONING (disabled) Networking Stack Support SLIP - (enabled) JME (disabled) IPVLAN (disabled) BPF_JIT (disabled) dccp (disabled) [ipv4] NET_FOU, Diag, CDG, NV (disabled) [ipv6] ILA (disabled), GRE (enabled) MAC80211 (disabled) netfilter_conntrack (enabled) Graphic and GPU Support DRM_I2C_SIL64 (disabled) TTY serial_nonstandard, cyclades, synclinkmp, synclink_gt, N_HDLC, serial_8250_MID (enabled) fbdev (enabled) USB - PHY (disabled) Storage Support Block scsi request (enabled) Block debugfs (enabled) Block Multi-Queue PCI (enabled) Block Multi-Queue VirtIO (enabled) Block Multi-Queue IOSched_deadline (enabled) MD Long Write -(disabled) SCSI - ARCMSR, AM53C974, WD719x, BNX2X_FCOE, BNX2_ISCSI, ESAS2R (disabled) SCSI - HISI_SAS (enabled) SPI - QUP, SLP (enabled) SSB (disabled) File Systems FS_DAX (enabled) BTRFS (disabled) Ceph (enabled) DLM (disabled) FSCAHE (disabled) GFS2 (disabled) Swap over NFS (disabled) NFS-FSCACHE (enabled) Virtualization and KVM Support KVM_IRQCHIP, KVM_IRQ_ROUTING, KVM_MSI (enabled) Virtio - noiommu (enabled) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/arm-kerenel-configuration-changes |
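A quick way to confirm how any of the options listed above were built into a running kernel is to read the installed config file. This is a minimal sketch; the option names shown are examples corresponding to the 48-bit VA, transparent hugepage, and NO_HZ_FULL entries above:
# Built-in options report "y", modules report "m", and disabled options appear as comments
grep -E 'CONFIG_ARM64_VA_BITS_48|CONFIG_TRANSPARENT_HUGEPAGE=|CONFIG_NO_HZ_FULL' /boot/config-$(uname -r)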
Chapter 9. Migrating your applications | Chapter 9. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or the command line . Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. During migration, the MTC preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. 9.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 9.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 9.2.1. Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. Obtain the MTC web console URL by entering the following command: USD oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com . 
Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 9.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites Cross-origin resource sharing must be configured on the source cluster. If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. If you are using direct image migration, you must expose a route to the image registry of the source cluster. Procedure Log in to the cluster. Obtain the migration-controller service account token: USD oc create token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ Log in to the MTC web console. In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster. To create the route, run the following command: For OpenShift Container Platform 3: USD oc create route passthrough --service=docker-registry --port=5000 -n default For OpenShift Container Platform 4: USD oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. When an OpenShift Container Platform cluster is created on Microsoft Azure, an Azure Resource Group is created to contain all resources associated with the cluster. 
In the Azure CLI, you can display all resource groups by issuing the following command: USD az group list ResourceGroups associated with OpenShift Container Platform clusters are tagged, where sample-rg-name is the value you would extract and supply to the UI: { "id": "/subscriptions/...//resourceGroups/sample-rg-name", "location": "centralus", "name": "...", "properties": { "provisioningState": "Succeeded" }, "tags": { "kubernetes.io_cluster.sample-ld57c": "owned", "openshift_creationDate": "2019-10-25T23:28:57.988208+00:00" }, "type": "Microsoft.Resources/resourceGroups" }, This information is also available from the Azure Portal in the Resource groups blade. Require SSL verification : Optional: Select this option to verify the Secure Socket Layer (SSL) connection to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 9.2.3. Adding a replication repository to the MTC web console You can add an object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Provider (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket. GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation. Click Close . The new repository appears in the Replication repositories list. 9.2.4. 
Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the MTC web console or by updating the MigCluster custom resource manifest. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository . Click Next . Select the projects for migration. Optional: Click the edit icon beside a project to change the target namespace. Click Next . Select a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click Next . Select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click Next . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click Next . Optional: Click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. Optional: Specify an Ansible runtime image if you are not using the default hook image.
If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace . Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources for persistent volume copy methods MTC file system copy method MTC snapshot copy method 9.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu next to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home → Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads → Pods to verify that the pods are running in the migrated namespace. Click Storage → Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
"oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'",
"oc create token migration-controller -n openshift-migration",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ",
"oc create route passthrough --service=docker-registry --port=5000 -n default",
"oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry",
"az group list",
"{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" },"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/migration_toolkit_for_containers/migrating-applications-with-mtc |
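The console URL, service account token, and exposed registry route used in the preceding chapter can be gathered in one pass. The following is a minimal sketch that reuses only the oc commands shown above; it assumes you are logged in to the relevant cluster with cluster-admin privileges, and the final oc get route line (to print the created route host) is an addition, not part of the documented procedure.

```bash
#!/usr/bin/env bash
# Sketch: collect the values needed to add a cluster to the MTC web console.
set -euo pipefail

# MTC web console URL (run against the cluster where MTC is installed)
oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'

# Service account token to paste into the "Service account token" field
oc create token migration-controller -n openshift-migration

# Exposed route to the internal image registry (OpenShift Container Platform 4),
# required for direct image migration
oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry
oc get route image-registry -n openshift-image-registry -o jsonpath='{.spec.host}{"\n"}'
```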
Chapter 2. Running Red Hat Quay in debug mode | Chapter 2. Running Red Hat Quay in debug mode Red Hat recommends gathering your debugging information when opening a support case. Running Red Hat Quay in debug mode provides verbose logging to help administrators find more information about various issues. Enabling debug mode can speed up the process to reproduce errors and validate a solution for things like geo-replication deployments, Operator deployments, standalone Red Hat Quay deployments, object storage issues, and so on. Additionally, it helps Red Hat Support perform a root cause analysis. 2.1. Running a standalone Red Hat Quay deployment in debug mode Running Red Hat Quay in debug mode provides verbose logging to help administrators find more information about various issues. Enabling debug mode can speed up the process to reproduce errors and validate a solution. Use the following procedure to run a standalone deployment of Red Hat Quay in debug mode. Procedure Enter the following command to run your standalone Red Hat Quay deployment in debug mode: USD podman run -p 443:8443 -p 80:8080 -e DEBUGLOG=true -v /config:/conf/stack -v /storage:/datastorage -d {productrepo}/{quayimage}:{productminv} To view the debug logs, enter the following command: USD podman logs quay 2.2. Running the Red Hat Quay Operator in debug mode Use the following procedure to run the Red Hat Quay Operator in debug mode. Procedure Enter the following command to edit the QuayRegistry custom resource: USD oc edit quayregistry <quay_registry_name> -n <quay_namespace> Update the QuayRegistry to add the following parameters: spec: - kind: quay managed: true overrides: env: - name: DEBUGLOG value: "true" After the Red Hat Quay Operator has restarted with debugging enabled, try pulling an image from the registry. If the pull is still slow, dump all logs from all Quay pods to a file, and check the files for more information. | [
"podman run -p 443:8443 -p 80:8080 -e DEBUGLOG=true -v /config:/conf/stack -v /storage:/datastorage -d {productrepo}/{quayimage}:{productminv}",
"podman logs quay",
"oc edit quayregistry <quay_registry_name> -n <quay_namespace>",
"spec: - kind: quay managed: true overrides: env: - name: DEBUGLOG value: \"true\""
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/troubleshooting_red_hat_quay/running-quay-debug-mode-intro |
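As a rough illustration of the standalone procedure above, the two podman commands can be combined into a short script. The image reference and the explicit --name flag are assumptions (the documented command does not name the container); substitute the Red Hat Quay image used in your deployment.

```bash
#!/usr/bin/env bash
# Sketch: start a standalone Red Hat Quay container with debug logging and capture its logs.
QUAY_IMAGE="registry.example.com/quay/quay:latest"   # assumption: replace with your Quay image

podman run --name quay -p 443:8443 -p 80:8080 \
  -e DEBUGLOG=true \
  -v /config:/conf/stack \
  -v /storage:/datastorage \
  -d "${QUAY_IMAGE}"

# Collect the verbose output for a support case
podman logs quay > quay-debug.log 2>&1
```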
Chapter 378. XSLT Component | Chapter 378. XSLT Component Available as of Camel version 1.3 The xslt: component allows you to process a message using an XSLT template. This can be ideal when using Templating to generate respopnses for requests. 378.1. URI format The URI format contains templateName , which can be one of the following: the classpath-local URI of the template to invoke the complete URL of the remote template. You can append query options to the URI in the following format: ?option=value&option=value&... Refer to the Spring Documentation for more detail of the URI syntax. Table 378.1. Example URIs URI Description xslt:com/acme/mytransform.xsl Refers to the file com/acme/mytransform.xsl on the classpath xslt:file:///foo/bar.xsl Refers to the file /foo/bar.xsl xslt:http://acme.com/cheese/foo.xsl Refers to the remote http resource For Camel 2.8 or older, Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> From Camel 2.9 onwards the XSLT component is provided directly in the camel-core. 378.2. Options The XSLT component supports 9 options, which are listed below. Name Description Default Type xmlConverter (advanced) To use a custom implementation of org.apache.camel.converter.jaxp.XmlConverter XmlConverter uriResolverFactory (advanced) To use a custom UriResolver which depends on a dynamic endpoint resource URI. Should not be used together with the option 'uriResolver'. XsltUriResolverFactory uriResolver (advanced) To use a custom UriResolver. Should not be used together with the option 'uriResolverFactory'. URIResolver contentCache (producer) Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true boolean saxon (producer) Whether to use Saxon as the transformerFactoryClass. If enabled then the class net.sf.saxon.TransformerFactoryImpl. You would need to add Saxon to the classpath. false boolean saxonExtensionFunctions (advanced) Allows you to use a custom net.sf.saxon.lib.ExtensionFunctionDefinition. You would need to add camel-saxon to the classpath. The function is looked up in the registry, where you can comma to separate multiple values to lookup. String saxonConfiguration (advanced) To use a custom Saxon configuration Object saxonConfiguration Properties (advanced) To set custom Saxon configuration properties Map resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The XSLT endpoint is configured using URI syntax: with the following path and query parameters: 378.2.1. Path Parameters (1 parameters): Name Description Default Type resourceUri Required Path to the template. The following is supported by the default URIResolver. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod String 378.2.2. 
Query Parameters (17 parameters): Name Description Default Type allowStAX (producer) Whether to allow using StAX as the javax.xml.transform.Source. true boolean contentCache (producer) Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true boolean deleteOutputFile (producer) If you have output=file then this option dictates whether or not the output file should be deleted when the Exchange is done processing. For example suppose the output file is a temporary file, then it can be a good idea to delete it after use. false boolean failOnNullBody (producer) Whether or not to throw an exception if the input body is null. true boolean output (producer) Option to specify which output type to use. Possible values are: string, bytes, DOM, file. The first three options are all in memory based, where as file is streamed directly to a java.io.File. For file you must specify the filename in the IN header with the key Exchange.XSLT_FILE_NAME which is also CamelXsltFileName. Also any paths leading to the filename must be created beforehand, otherwise an exception is thrown at runtime. string XsltOutput saxon (producer) Whether to use Saxon as the transformerFactoryClass. If enabled then the class net.sf.saxon.TransformerFactoryImpl. You would need to add Saxon to the classpath. false boolean transformerCacheSize (producer) The number of javax.xml.transform.Transformer object that are cached for reuse to avoid calls to Template.newTransformer(). 0 int converter (advanced) To use a custom implementation of org.apache.camel.converter.jaxp.XmlConverter XmlConverter entityResolver (advanced) To use a custom org.xml.sax.EntityResolver with javax.xml.transform.sax.SAXSource. EntityResolver errorListener (advanced) Allows to configure to use a custom javax.xml.transform.ErrorListener. Beware when doing this then the default error listener which captures any errors or fatal errors and store information on the Exchange as properties is not in use. So only use this option for special use-cases. ErrorListener resultHandlerFactory (advanced) Allows you to use a custom org.apache.camel.builder.xml.ResultHandlerFactory which is capable of using custom org.apache.camel.builder.xml.ResultHandler types. ResultHandlerFactory saxonConfiguration (advanced) To use a custom Saxon configuration Object saxonExtensionFunctions (advanced) Allows you to use a custom net.sf.saxon.lib.ExtensionFunctionDefinition. You would need to add camel-saxon to the classpath. The function is looked up in the registry, where you can comma to separate multiple values to lookup. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean transformerFactory (advanced) To use a custom XSLT transformer factory TransformerFactory transformerFactoryClass (advanced) To use a custom XSLT transformer factory, specified as a FQN class name String uriResolver (advanced) To use a custom javax.xml.transform.URIResolver URIResolver 378.3. Using XSLT endpoints The following format is an expample of using an XSLT template to formulate a response for a message for InOut message exchanges (where there is a JMSReplyTo header) from("activemq:My.Queue"). 
to("xslt:com/acme/mytransform.xsl"); If you want to use InOnly and consume the message and send it to another destination you could use the following route: from("activemq:My.Queue"). to("xslt:com/acme/mytransform.xsl"). to("activemq:Another.Queue"); 378.4. Getting Useable Parameters into the XSLT By default, all headers are added as parameters which are then available in the XSLT. To make the parameters useable, you will need to declare them. <setHeader headerName="myParam"><constant>42</constant></setHeader> <to uri="xslt:MyTransform.xsl"/> The parameter also needs to be declared in the top level of the XSLT for it to be available: <xsl: ...... > <xsl:param name="myParam"/> <xsl:template ...> 378.5. Spring XML versions To use the above examples in Spring XML you would use something like the following code: <camelContext xmlns="http://activemq.apache.org/camel/schema/spring"> <route> <from uri="activemq:My.Queue"/> <to uri="xslt:org/apache/camel/spring/processor/example.xsl"/> <to uri="activemq:Another.Queue"/> </route> </camelContext> To see an example, look at the test case along with its Spring XML . 378.6. Using xsl:include Camel 2.2 or older If you use xsl:include in your XSL files in Camel 2.2 or older , the default javax.xml.transform.URIResolver is used. Files will be resolved relative to the JVM starting folder. For example the following include statement will look up the staff_template.xsl file starting from the folder where the application was started. <xsl:include href="staff_template.xsl"/> Camel 2.3 or newer For Camel 2.3 or newer, Camel provides its own implementation of URIResolver . This allows Camel to load included files from the classpath. For example the include file in the following code will be located relative to the starting endpoint. <xsl:include href="staff_template.xsl"/> This means that Camel will locate the file in the classpath as org/apache/camel/component/xslt/staff_template.xsl You can use classpath: or file: to instruct Camel to look either in the classpath or file system. If you omit the prefix then Camel uses the prefix from the endpoint configuration. If no prefix is specified in the endpoint configuration, the default is classpath: . You can also refer backwards in the include paths. In the following example, the xsl file will be resolved under org/apache/camel/component . <xsl:include href="../staff_other_template.xsl"/> 378.7. Using xsl:include and default prefix In Camel 2.10.3 and older , classpath: is used as the default prefix. If you configure the starting resource to load using file: then all subsequent incudes will have to be prefixed with file: . From Camel 2.10.4 , Camel will use the prefix from the endpoint configuration as the default prefix. You can explicitly specify file: or classpath: loading. The two loading types can be mixed in a XSLT script, if necessary. 378.8. 
Using Saxon extension functions Since Saxon 9.2, writing extension functions has been supplemented by a new mechanism, referred to as integrated extension functions you can now easily use camel as shown in the below example: SimpleRegistry registry = new SimpleRegistry(); registry.put("function1", new MyExtensionFunction1()); registry.put("function2", new MyExtensionFunction2()); CamelContext context = new DefaultCamelContext(registry); context.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start") .to("xslt:org/apache/camel/component/xslt/extensions/extensions.xslt?saxonExtensionFunctions=#function1,#function2"); } }); With Spring XML: <bean id="function1" class="org.apache.camel.component.xslt.extensions.MyExtensionFunction1"/> <bean id="function2" class="org.apache.camel.component.xslt.extensions.MyExtensionFunction2"/> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:extensions"/> <to uri="xslt:org/apache/camel/component/xslt/extensions/extensions.xslt?saxonExtensionFunctions=#function1,#function2"/> </route> </camelContext> 378.9. Dynamic stylesheets To provide a dynamic stylesheet at runtime you can define a dynamic URI. See How to use a dynamic URI in to() for more information. Available as of Camel 2.9 (removed in 2.11.4, 2.12.3 and 2.13.0) Camel provides the CamelXsltResourceUri header which you can use to define an alternative stylesheet to that configured on the endpoint URI. This allows you to provide a dynamic stylesheet at runtime. 378.10. Accessing warnings, errors and fatalErrors from XSLT ErrorListener Available as of Camel 2.14 From Camel 2.14 , any warning/error or fatalError is stored on the current Exchange as a property with the keys Exchange.XSLT_ERROR , Exchange.XSLT_FATAL_ERROR , or Exchange.XSLT_WARNING which allows end users to get hold of any errors happening during transformation. For example in the stylesheet below, we want to terminate if a staff has an empty dob field. And to include a custom error message using xsl:message. <xsl:template match="/"> <html> <body> <xsl:for-each select="staff/programmer"> <p>Name: <xsl:value-of select="name"/><br /> <xsl:if test="dob=''"> <xsl:message terminate="yes">Error: DOB is an empty string!</xsl:message> </xsl:if> </p> </xsl:for-each> </body> </html> </xsl:template> The exception is stored on the Exchange as a warning with the key Exchange.XSLT_WARNING. 378.11. Notes on using XSLT and Java Versions Here are some observations from Sameer, a Camel user, which he kindly shared with us: In case anybody faces issues with the XSLT endpoint please review these points. I was trying to use an xslt endpoint for a simple transformation from one xml to another using a simple xsl. The output xml kept appearing (after the xslt processor in the route) with outermost xml tag with no content within. No explanations show up in the DEBUG logs. On the TRACE logs however I did find some error/warning indicating that the XMLConverter bean could no be initialized. After a few hours of cranking my mind, I had to do the following to get it to work (thanks to some posts on the users forum that gave some clue): Use the transformerFactory option in the route ("xslt:my-transformer.xsl?transformerFactory=tFactory") with the tFactory bean having bean defined in the spring context for class="org.apache.xalan.xsltc.trax.TransformerFactoryImpl" . Added the Xalan jar into my maven pom. 
My guess is that the default xml parsing mechanism supplied within the JDK (I am using 1.6.0_03) does not work right in this context and does not throw up any error either. When I switched to Xalan this way it works. This is not a Camel issue, but might need a mention on the xslt component page. Another note, jdk 1.6.0_03 ships with JAXB 2.0 while Camel needs 2.1. One workaround is to add the 2.1 jar to the jre/lib/endorsed directory for the jvm or as specified by the container. Hope this post saves newbie Camel riders some time. 378.12. See Also Configuring Camel Component Endpoint Getting Started | [
"xslt:templateName[?options]",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"xslt:resourceUri",
"from(\"activemq:My.Queue\"). to(\"xslt:com/acme/mytransform.xsl\");",
"from(\"activemq:My.Queue\"). to(\"xslt:com/acme/mytransform.xsl\"). to(\"activemq:Another.Queue\");",
"<setHeader headerName=\"myParam\"><constant>42</constant></setHeader> <to uri=\"xslt:MyTransform.xsl\"/>",
"<xsl: ...... > <xsl:param name=\"myParam\"/> <xsl:template ...>",
"<camelContext xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <route> <from uri=\"activemq:My.Queue\"/> <to uri=\"xslt:org/apache/camel/spring/processor/example.xsl\"/> <to uri=\"activemq:Another.Queue\"/> </route> </camelContext>",
"<xsl:include href=\"staff_template.xsl\"/>",
"<xsl:include href=\"staff_template.xsl\"/>",
"<xsl:include href=\"../staff_other_template.xsl\"/>",
"SimpleRegistry registry = new SimpleRegistry(); registry.put(\"function1\", new MyExtensionFunction1()); registry.put(\"function2\", new MyExtensionFunction2()); CamelContext context = new DefaultCamelContext(registry); context.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\") .to(\"xslt:org/apache/camel/component/xslt/extensions/extensions.xslt?saxonExtensionFunctions=#function1,#function2\"); } });",
"<bean id=\"function1\" class=\"org.apache.camel.component.xslt.extensions.MyExtensionFunction1\"/> <bean id=\"function2\" class=\"org.apache.camel.component.xslt.extensions.MyExtensionFunction2\"/> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:extensions\"/> <to uri=\"xslt:org/apache/camel/component/xslt/extensions/extensions.xslt?saxonExtensionFunctions=#function1,#function2\"/> </route> </camelContext>",
"<xsl:template match=\"/\"> <html> <body> <xsl:for-each select=\"staff/programmer\"> <p>Name: <xsl:value-of select=\"name\"/><br /> <xsl:if test=\"dob=''\"> <xsl:message terminate=\"yes\">Error: DOB is an empty string!</xsl:message> </xsl:if> </p> </xsl:for-each> </body> </html> </xsl:template>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/xslt-component |
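The classpath workaround described in Section 378.11 can also be applied from the command line when the Camel application is started as a plain Java process. The sketch below is illustrative only: the jar locations, JDK path, and main class are placeholders, and the javax.xml.transform.TransformerFactory system property is the standard JAXP mechanism for selecting Xalan globally, offered here as an alternative to the transformerFactory endpoint option shown above.

```bash
# Sketch: force Xalan as the JAXP transformer factory and endorse JAXB 2.1,
# following the "Notes on using XSLT and Java Versions" section.
JAVA_HOME=/usr/lib/jvm/java                              # assumption: your JDK location
cp jaxb-api-2.1.jar "${JAVA_HOME}/jre/lib/endorsed/"     # endorsed-dir workaround for JAXB 2.0 JDKs

java -cp "lib/*:camel-app.jar" \
  -Djavax.xml.transform.TransformerFactory=org.apache.xalan.xsltc.trax.TransformerFactoryImpl \
  com.example.MyCamelMain                                # placeholder main class
```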
2.4. Obtaining Information about Control Groups | 2.4. Obtaining Information about Control Groups Use the systemctl command to list system units and to view their status. Also, the systemd-cgls command is provided to view the hierarchy of control groups and systemd-cgtop to monitor their resource consumption in real time. 2.4.1. Listing Units Use the following command to list all active units on the system: The list-units option is executed by default, which means that you will receive the same output when you omit this option and execute just: The output displayed above contains five columns: UNIT - the name of the unit that also reflects the unit's position in the cgroup tree. As mentioned in the section called "Systemd Unit Types" , three unit types are relevant for resource control: slice , scope , and service . For a complete list of systemd 's unit types, see the chapter called Managing Services with systemd in Red Hat Enterprise Linux 7 System Administrators Guide . LOAD - indicates whether the unit configuration file was properly loaded. If the unit file failed to load, the field contains the state error instead of loaded . Other unit load states are: stub , merged , and masked . ACTIVE - the high-level unit activation state, which is a generalization of SUB. SUB - the low-level unit activation state. The range of possible values depends on the unit type. DESCRIPTION - the description of the unit's content and functionality. By default, systemctl lists only active units (in terms of high-level activations state in the ACTIVE field). Use the --all option to see inactive units too. To limit the amount of information in the output list, use the --type ( -t ) parameter that requires a comma-separated list of unit types such as service and slice , or unit load states such as loaded and masked . Example 2.8. Using systemctl list-units To view a list of all slices used on the system, type: To list all active masked services, type: To list all unit files installed on your system and their status, type: 2.4.2. Viewing the Control Group Hierarchy The aforementioned listing commands do not go beyond the unit level to show the actual processes running in cgroups. Also, the output of systemctl does not show the hierarchy of units. You can achieve both by using the systemd-cgls command that groups the running process according to cgroups. To display the whole cgroup hierarchy on your system, type: When systemd-cgls is issued without parameters, it returns the entire cgroup hierarchy. The highest level of the cgroup tree is formed by slices and can look as follows: Note that machine slice is present only if you are running a virtual machine or a container. For more information on the cgroup tree, see the section called "Systemd Unit Types" . To reduce the output of systemd-cgls , and to view a specified part of the hierarchy, execute: Replace name with a name of the resource controller you want to inspect. As an alternative, use the systemctl status command to display detailed information about a system unit. A cgroup subtree is a part of the output of this command. To learn more about systemctl status , see the chapter called Managing Services with systemd in Red Hat Enterprise Linux 7 System Administrators Guide . Example 2.9. Viewing the Control Group Hierarchy To see a cgroup tree of the memory resource controller, execute: The output of the above command lists the services that interact with the selected controller. 
A different approach is to view a part of the cgroup tree for a certain service, slice, or scope unit: Besides the aforementioned tools, systemd also provides the machinectl command dedicated to monitoring Linux containers. 2.4.3. Viewing Resource Controllers The aforementioned systemctl commands enable monitoring the higher-level unit hierarchy, but do not show which resource controllers in the Linux kernel are actually used by which processes. This information is stored in dedicated process files. To view it, type as root : Where PID stands for the ID of the process you wish to examine. By default, the list is the same for all units started by systemd , since it automatically mounts all default controllers. See the following example: By examining this file, you can determine if the process has been placed in the correct cgroups as defined by the systemd unit file specifications. 2.4.4. Monitoring Resource Consumption The systemd-cgls command provides a static snapshot of the cgroup hierarchy. To see a dynamic account of currently running cgroups ordered by their resource usage (CPU, Memory, and IO), use: The behavior, provided statistics, and control options of systemd-cgtop are akin to those of the top utility. See the systemd-cgtop (1) manual page for more information.
"~]# systemctl list-units",
"~]USD systemctl UNIT LOAD ACTIVE SUB DESCRIPTION abrt-ccpp.service loaded active exited Install ABRT coredump hook abrt-oops.service loaded active running ABRT kernel log watcher abrt-vmcore.service loaded active exited Harvest vmcores for ABRT abrt-xorg.service loaded active running ABRT Xorg log watcher",
"~]USD systemctl -t slice",
"~]USD systemctl -t service,masked",
"~]USD systemctl list-unit-files",
"~]USD systemd-cgls",
"├─system │ ├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 20 │ │ ├─user │ ├─user-1000 │ │ └─ │ ├─user-2000 │ │ └─ │ │ └─machine ├─machine-1000 │ └─",
"~]USD systemd-cgls name",
"~]USD systemctl name",
"~]USD systemd-cgls memory memory: ├─ 1 /usr/lib/systemd/systemd --switched-root --system --deserialize 23 ├─ 475 /usr/lib/systemd/systemd-journald",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled) Active: active (running) since Sun 2014-03-23 08:01:14 MDT; 33min ago Process: 3385 ExecReload=/usr/sbin/httpd USDOPTIONS -k graceful (code=exited, status=0/SUCCESS) Main PID: 1205 (httpd) Status: \"Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec\" CGroup: /system.slice/httpd.service ├─1205 /usr/sbin/httpd -DFOREGROUND ├─3387 /usr/sbin/httpd -DFOREGROUND ├─3388 /usr/sbin/httpd -DFOREGROUND ├─3389 /usr/sbin/httpd -DFOREGROUND ├─3390 /usr/sbin/httpd -DFOREGROUND └─3391 /usr/sbin/httpd -DFOREGROUND",
"~]# cat proc/ PID /cgroup",
"~]# cat proc/ 27 /cgroup 10:hugetlb:/ 9:perf_event:/ 8:blkio:/ 7:net_cls:/ 6:freezer:/ 5:devices:/ 4:memory:/ 3:cpuacct,cpu:/ 2:cpuset:/ 1:name=systemd:/",
"~]# systemd-cgtop"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/sec-obtaining_information_about_control_groups |
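To connect the unit-level view with the per-process controller files described in Section 2.4.3, you can look up a service's main PID and read its cgroup file directly. The following sketch uses only standard systemctl output parsing; httpd.service is just an example unit.

```bash
# Sketch: show which resource controllers a service's main process is attached to.
UNIT=httpd.service                                           # example unit
PID=$(systemctl show -p MainPID "${UNIT}" | cut -d= -f2)     # extract the main PID
cat "/proc/${PID}/cgroup"                                    # per-controller cgroup membership
```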
2.8. Changing Runlevels at Boot Time | 2.8. Changing Runlevels at Boot Time Under Red Hat Enterprise Linux, it is possible to change the default runlevel at boot time. To change the runlevel of a single boot session, use the following instructions: When the GRUB menu bypass screen appears at boot time, press any key to enter the GRUB menu (within the first three seconds). Press the a key to append to the kernel command. Add <space> <runlevel> at the end of the boot options line to boot to the desired runlevel. For example, the following entry would initiate a boot process into runlevel 3: | [
"grub append> ro root=/dev/VolGroup00/LogVol00 rhgb quiet 3"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-grub-runlevels |
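After the system has booted with the appended runlevel, you can confirm that the change took effect. This check is a sketch; it assumes a standard Red Hat Enterprise Linux userland where the runlevel and who commands are available.

```bash
# Sketch: verify the runlevel selected at boot time.
runlevel        # prints the previous and current runlevel, for example "N 3"
who -r          # alternative: prints the current run-level and when it was entered
```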
Chapter 1. Overview | Chapter 1. Overview AMQ Broker is a high-performance messaging implementation based on ActiveMQ Artemis. It has fast, journal-based message persistence and supports multiple languages, protocols, and platforms. AMQ Broker provides multiple interfaces for managing and interacting with your broker instances, such as a management console, management APIs, and a command-line interface. In addition, you can monitor broker performance by collecting runtime metrics, configure brokers to proactively monitor for problems such as deadlock conditions, and interactively check the health of brokers and queues. This guide provides detailed information about typical broker management tasks such as: Upgrading your broker instances Using the command-line interface and management API Checking the health of brokers and queues Collecting broker runtime metrics Proactively monitoring critical broker operations 1.1. Supported configurations Refer to the article " Red Hat AMQ 7 Supported Configurations " on the Red Hat Customer Portal for current information regarding AMQ Broker supported configurations. 1.2. Document conventions This document uses the following conventions for the sudo command, file paths, and replaceable values. The sudo command In this document, sudo is used for any command that requires root privileges. You should always exercise caution when using sudo , as any changes can affect the entire system. For more information about using sudo , see Managing sudo access . About the use of file paths in this document In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/... ). If you are using Microsoft Windows, you should use the equivalent Microsoft Windows paths (for example, C:\Users\... ). Replaceable values This document sometimes uses replaceable values that you must replace with values specific to your environment. Replaceable values are lowercase, enclosed by angle brackets ( < > ), and are styled using italics and monospace font. Multiple words are separated by underscores ( _ ) . For example, in the following command, replace <install_dir> with your own directory name. USD <install_dir> /bin/artemis create mybroker | [
"<install_dir> /bin/artemis create mybroker"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/managing_amq_broker/assembly-br-managing-overview-managing |
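As an illustration of the replaceable-value convention described above, the <install_dir> placeholder in the artemis create command is swapped for a concrete path before the command is run. The path below is an assumption, not a required location.

```bash
# Sketch: substitute <install_dir> with your actual broker installation directory.
INSTALL_DIR=/opt/amq-broker        # assumption: the directory where AMQ Broker is extracted
"${INSTALL_DIR}/bin/artemis" create mybroker
```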
Chapter 32. Downloading Red Hat build of OptaPlanner examples | Chapter 32. Downloading Red Hat build of OptaPlanner examples You can download the Red Hat build of OptaPlanner examples as a part of the Red Hat Process Automation Manager add-ons package available on the Red Hat Customer Portal. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13 Add Ons . Extract the rhpam-7.13.5-add-ons.zip file. The extracted add-ons folder contains the rhpam-7.13.5-planner-engine.zip file. Extract the rhpam-7.13.5-planner-engine.zip file. Result The extracted rhpam-7.13.5-planner-engine directory contains example source code under the following subdirectories: examples/sources/src/main/java/org/optaplanner/examples examples/sources/src/main/resources/org/optaplanner/examples 32.1. Running OptaPlanner examples Red Hat build of OptaPlanner includes several examples that demonstrate a variety of planning use cases. Download and use the examples to explore different types of planning solutions. Prerequisites You have downloaded and extracted the examples as described in Chapter 32, Downloading Red Hat build of OptaPlanner examples . Procedure To run the examples, enter one of the following commands in the rhpam-7.13.5-planner-engine/examples directory: Linux or Mac: Windows: The OptaPlanner Examples window opens. Select an example to run it. Note Red Hat build of OptaPlanner has no GUI dependencies. It runs just as well on a server or a mobile JVM as it does on the desktop. 32.2. Running the Red Hat build of OptaPlanner examples in an IDE (IntelliJ, Eclipse, or Netbeans) If you use an integrated development environment (IDE), such as IntelliJ, Eclipse, or Netbeans, you can run your downloaded OptaPlanner examples within your development environment. Prerequisites You have downloaded and extracted the OptaPlanner examples as described in Chapter 32, Downloading Red Hat build of OptaPlanner examples . Procedure Open the OptaPlanner examples as a new project: For IntelliJ or Netbeans, open examples/sources/pom.xml as the new project. The Maven integration guides you through the rest of the installation. Skip the rest of the steps in this procedure. For Eclipse, open a new project for the /examples/binaries directory, located under the rhpam-7.13.5-planner-engine directory. Add all the JAR files that are in the binaries directory to the classpath, except for the examples/binaries/optaplanner-examples-7.67.0.Final-redhat-00024.jar file. Add the Java source directory src/main/java and the Java resources directory src/main/resources , located under the rhpam-7.13.5-planner-engine/examples/sources/ directory. Create a run configuration: Main class: org.optaplanner.examples.app.OptaPlannerExamplesApp VM parameters (optional): -Xmx512M -server -Dorg.optaplanner.examples.dataDir=examples/sources/data Working directory: examples/sources Run the run configuration. | [
"./runExamples.sh",
"runExamples.bat"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/examples-download-proc |
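The download and run steps above can be expressed as a short shell session. File and script names come from this chapter; exact folder layout after extraction may differ slightly depending on your unzip tool, so treat the paths as a sketch.

```bash
# Sketch: extract the archives and launch the OptaPlanner examples GUI (Linux or Mac).
unzip rhpam-7.13.5-add-ons.zip
unzip rhpam-7.13.5-planner-engine.zip        # found inside the extracted add-ons folder
cd rhpam-7.13.5-planner-engine/examples
./runExamples.sh                             # use runExamples.bat on Windows
```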
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/introduction_to_the_migration_toolkit_for_applications/making-open-source-more-inclusive |
Chapter 26. Probe schema reference | Chapter 26. Probe schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaExporterSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , TlsSidecar , ZookeeperClusterSpec Property Property type Description failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. initialDelaySeconds integer The initial delay before the health is first checked. Defaults to 15 seconds. Minimum value is 0. periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1. timeoutSeconds integer The timeout for each attempted health check. Defaults to 5 seconds. Minimum value is 1. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-Probe-reference
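The table above lists the fields of the Probe type; in practice they appear as small YAML blocks inside the resources named under "Used in". The fragment below is a hypothetical sketch written out from the shell: the field names and values mirror the table, while the livenessProbe/readinessProbe placement under a Kafka resource is an assumption to be checked against your own custom resource.

```bash
# Sketch: a Probe fragment as it might appear under spec.kafka in a Kafka resource.
cat <<'EOF' > probe-fragment.yaml
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    readinessProbe:
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3
      successThreshold: 1
EOF
```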
Chapter 1. OpenShift Container Platform storage overview | Chapter 1. OpenShift Container Platform storage overview OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. 1.1. Glossary of common terms for OpenShift Container Platform storage This glossary defines common terms that are used in the storage content. Access modes Volume access modes describe volume capabilities. You can use access modes to match persistent volume claim (PVC) and persistent volume (PV). The following are the examples of access modes: ReadWriteOnce (RWO) ReadOnlyMany (ROX) ReadWriteMany (RWX) ReadWriteOncePod (RWOP) Cinder The Block Storage service for Red Hat OpenStack Platform (RHOSP) which manages the administration, security, and scheduling of all volumes. Config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. Container Storage Interface (CSI) An API specification for the management of container storage across different container orchestration (CO) systems. Dynamic Provisioning The framework allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision persistent storage. Ephemeral storage Pods and containers can require temporary or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. Fiber channel A networking technology that is used to transfer data among data centers, computer servers, switches and storage. FlexVolume FlexVolume is an out-of-tree plugin interface that uses an exec-based model to interface with storage drivers. You must install the FlexVolume driver binaries in a pre-defined volume plugin path on each node and in some cases the control plane nodes. fsGroup The fsGroup defines a file system group ID of a pod. iSCSI Internet Small Computer Systems Interface (iSCSI) is an Internet Protocol-based storage networking standard for linking data storage facilities. An iSCSI volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod. hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. KMS key The Key Management Service (KMS) helps you achieve the required level of encryption of your data across different services. you can use the KMS key to encrypt, decrypt, and re-encrypt data. Local volumes A local volume represents a mounted local storage device such as a disk, partition or directory. NFS A Network File System (NFS) that allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. OpenShift Data Foundation A provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds Persistent storage Pods and containers can require permanent storage for their operation. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. 
Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volumes (PV) OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volume claims (PVCs) You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment. Pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. Reclaim policy A policy that tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . Role-based access control (RBAC) Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. Stateless applications A stateless application is an application program that does not save client data generated in one session for use in the session with that client. Stateful applications A stateful application is an application program that saves data to persistent disk storage. A server, client, and applications can use a persistent disk storage. You can use the Statefulset object in OpenShift Container Platform to manage the deployment and scaling of a set of Pods, and provides guarantee about the ordering and uniqueness of these Pods. Static provisioning A cluster administrator creates a number of PVs. PVs contain the details of storage. PVs exist in the Kubernetes API and are available for consumption. Storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Storage class A storage class provides a way for administrators to describe the classes of storage they offer. Different classes might map to quality of service levels, backup policies, arbitrary policies determined by the cluster administrators. VMware vSphere's Virtual Machine Disk (VMDK) volumes Virtual Machine Disk (VMDK) is a file format that describes containers for virtual hard disk drives that is used in virtual machines. 1.2. Storage types OpenShift Container Platform storage is broadly classified into two categories, namely ephemeral storage and persistent storage. 1.2.1. Ephemeral storage Pods and containers are ephemeral or transient in nature and designed for stateless applications. Ephemeral storage allows administrators and developers to better manage the local storage for some of their operations. For more information about ephemeral storage overview, types, and management, see Understanding ephemeral storage . 1.2.2. Persistent storage Stateful applications deployed in containers require persistent storage. OpenShift Container Platform uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. 
For more information about persistent storage overview, configuration, and lifecycle, see Understanding persistent storage . 1.3. Container Storage Interface (CSI) CSI is an API specification for the management of container storage across different container orchestration (CO) systems. You can manage the storage volumes within the container native environments, without having specific knowledge of the underlying storage infrastructure. With the CSI, storage works uniformly across different container orchestration systems, regardless of the storage vendors you are using. For more information about CSI, see Using Container Storage Interface (CSI) . 1.4. Dynamic Provisioning Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. For more information about dynamic provisioning, see Dynamic provisioning . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/storage/storage-overview |
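To make the PV and PVC terminology above concrete, the following is a minimal sketch of a persistent volume claim created from the command line. The claim name, size, and storage class are placeholders, not values from any documented cluster.

```bash
# Sketch: request storage with a PVC; the cluster binds it to a matching PV
# (or dynamically provisions one if the storage class supports it).
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim            # placeholder name
spec:
  accessModes:
    - ReadWriteOnce              # one of the access modes listed in the glossary
  resources:
    requests:
      storage: 1Gi               # placeholder size
  storageClassName: example-sc   # placeholder storage class
EOF
```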
probe::nfsd.create | probe::nfsd.create Name probe::nfsd.create - NFS server creating a file (regular, dir, device, fifo) for a client Synopsis nfsd.create Values fh file handle (the first part is the length of the file handle) iap_valid Attribute flags filelen the length of the file name type file type (regular, dir, device, fifo, ...) filename file name iap_mode file access mode client_ip the IP address of the client Description Sometimes nfsd will call nfsd_create_v3 instead of this probe point. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfsd-create
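As a quick illustration of how this probe point and its context variables might be used, the one-liner below prints the file name and name length each time the probe fires. It is a sketch: the exact types of the variables are assumed from the descriptions above.

```bash
# Sketch: trace NFS server file creation using the nfsd.create probe point.
stap -e 'probe nfsd.create { printf("nfsd.create: filename=%s filelen=%d\n", filename, filelen) }'
```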
Chapter 15. Configuring a multi-site, fault-tolerant messaging system using Ceph | Chapter 15. Configuring a multi-site, fault-tolerant messaging system using Ceph Large-scale enterprise messaging systems commonly have discrete broker clusters located in geographically distributed data centers. In the event of a data center outage, system administrators might need to preserve existing messaging data and ensure that client applications can continue to produce and consume messages. You can use specific broker topologies and Red Hat Ceph Storage, a software-defined storage platform, to ensure continuity of your messaging system during a data center outage. This type of solution is called a multi-site, fault-tolerant architecture . Note If you only require AMQP protocol support, consider Chapter 16, Configuring a multi-site, fault-tolerant messaging system using broker connections . The following sections explain how to protect your messaging system from data center outages using Red Hat Ceph Storage: How Red Hat Ceph Storage clusters work Installing and configuring a Red Hat Ceph Storage cluster Adding backup brokers to take over from live brokers in the event of a data center outage Configuring your broker servers with the Ceph client role Configuring each broker to use the shared store high-availability (HA) policy, specifying where in the Ceph File System each broker stores its messaging data Configuring client applications to connect to new brokers in the event of a data center outage Restarting a data center after an outage Note Multi-site fault tolerance is not a replacement for high-availability (HA) broker redundancy within data centers. Broker redundancy based on live-backup groups provides automatic protection against single broker failures within single clusters. By contrast, multi-site fault tolerance protects against large-scale data center outages. Note To use Red Hat Ceph Storage to ensure continuity of your messaging system, you must configure your brokers to use the shared store high-availability (HA) policy. You cannot configure your brokers to use the replication HA policy. For more information about these policies, see Implementing High Availability . 15.1. How Red Hat Ceph Storage clusters work Red Hat Ceph Storage is a clustered object storage system. Red Hat Ceph Storage uses data sharding of objects and policy-based replication to guarantee data integrity and system availability. Red Hat Ceph Storage uses an algorithm called CRUSH (Controlled Replication Under Scalable Hashing) to determine how to store and retrieve data by automatically computing data storage locations. You configure Ceph items called CRUSH maps , which detail cluster topography and specify how data is replicated across storage clusters. CRUSH maps contain lists of Object Storage Devices (OSDs), a list of 'buckets' for aggregating the devices into a failure domain hierarchy, and rules that tell CRUSH how it should replicate data in a Ceph cluster's pools. By reflecting the underlying physical organization of the installation, CRUSH maps can model - and thereby address - potential sources of correlated device failures, such as physical proximity, shared power sources, and shared networks. By encoding this information into the cluster map, CRUSH can separate object replicas across different failure domains (for example, data centers) while still maintaining a pseudo-random distribution of data across the storage cluster. 
This helps to prevent data loss and enables the cluster to operate in a degraded state. Red Hat Ceph Storage clusters require a number of nodes (physical or virtual) to operate. Clusters must include the following types of nodes: Monitor nodes Each Monitor (MON) node runs the monitor daemon ( ceph-mon ), which maintains a master copy of the cluster map. The cluster map includes the cluster topology. A client connecting to the Ceph cluster retrieves the current copy of the cluster map from the Monitor, which enables the client to read from and write data to the cluster. Important A Red Hat Ceph Storage cluster can run with one Monitor node; however, to ensure high availability in a production cluster, Red Hat supports only deployments with at least three Monitor nodes. A minimum of three Monitor nodes means that in the event of the failure or unavailability of one Monitor, a quorum exists for the remaining Monitor nodes in the cluster to elect a new leader. Manager nodes Each Manager (MGR) node runs the Ceph Manager daemon ( ceph-mgr ), which is responsible for keeping track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. Usually, Manager nodes are colocated (that is, on the same host machine) with Monitor nodes. Object Storage Device nodes Each Object Storage Device (OSD) node runs the Ceph OSD daemon ( ceph-osd ), which interacts with logical disks attached to the node. Ceph stores data on OSD nodes. Ceph can run with very few OSD nodes (the default is three), but production clusters realize better performance at modest scales, for example, with 50 OSDs in a storage cluster. Having multiple OSDs in a storage cluster enables system administrators to define isolated failure domains within a CRUSH map. Metadata Server nodes Each Metadata Server (MDS) node runs the MDS daemon ( ceph-mds ), which manages metadata related to files stored on the Ceph File System (CephFS). The MDS daemon also coordinates access to the shared cluster. Additional resources For more information about Red Hat Ceph Storage, see What is Red Hat Ceph Storage? 15.2. Installing Red Hat Ceph Storage AMQ Broker multi-site, fault-tolerant architectures use Red Hat Ceph Storage 3. By replicating data across data centers, a Red Hat Ceph Storage cluster effectively creates a shared store available to brokers in separate data centers. You configure your brokers to use the shared store high-availability (HA) policy and store messaging data in the Red Hat Ceph Storage cluster. Red Hat Ceph Storage clusters intended for production use should have a minimum of: Three Monitor (MON) nodes Three Manager (MGR) nodes Three Object Storage Device (OSD) nodes containing multiple OSD daemons Three Metadata Server (MDS) nodes Important You can run the OSD, MON, MGR, and MDS nodes on either the same or separate physical or virtual machines. However, to ensure fault tolerance within your Red Hat Ceph Storage cluster, it is good practice to distribute each of these types of nodes across distinct data centers. In particular, you must ensure that in the event of a single data center outage, your storage cluster still has a minimum of two available MON nodes. Therefore, if you have three MON nodes in you cluster, each of these nodes must run on separate host machines in separate data centers. Do not run two MON nodes in a single data center, because failure of this data center will leave your storage cluster with only one remaining MON node. 
In this situation, the storage cluster can no longer operate. The procedures linked-to from this section show you how to install a Red Hat Ceph Storage 3 cluster that includes MON, MGR, OSD, and MDS nodes. Prerequisites For information about preparing a Red Hat Ceph Storage installation, see: Prerequisites Requirements Checklist for Installing Red Hat Ceph Storage Procedure For procedures that show how to install a Red Hat Ceph 3 storage cluster that includes MON, MGR, OSD, and MDS nodes, see: Installing a Red Hat Ceph Storage Cluster Installing Metadata Servers 15.3. Configuring a Red Hat Ceph Storage cluster This example procedure shows how to configure your Red Hat Ceph storage cluster for fault tolerance. You create CRUSH buckets to aggregate your Object Storage Device (OSD) nodes into data centers that reflect your real-life, physical installation. In addition, you create a rule that tells CRUSH how to replicate data in your storage pools. These steps update the default CRUSH map that was created by your Ceph installation. Prerequisites You have already installed a Red Hat Ceph Storage cluster. For more information, see Installing Red Hat Ceph Storage . You should understand how Red Hat Ceph Storage uses Placement Groups (PGs) to organize large numbers of data objects in a pool, and how to calculate the number of PGs to use in your pool. For more information, see Placement Groups (PGs) . You should understand how to set the number of object replicas in a pool. For more information, Set the Number of Object Replicas . Procedure Create CRUSH buckets to organize your OSD nodes. Buckets are lists of OSDs, based on physical locations such as data centers. In Ceph, these physical locations are known as failure domains . ceph osd crush add-bucket dc1 datacenter ceph osd crush add-bucket dc2 datacenter Move the host machines for your OSD nodes to the data center CRUSH buckets that you created. Replace host names host1 - host4 with the names of your host machines. ceph osd crush move host1 datacenter=dc1 ceph osd crush move host2 datacenter=dc1 ceph osd crush move host3 datacenter=dc2 ceph osd crush move host4 datacenter=dc2 Ensure that the CRUSH buckets you created are part of the default CRUSH tree. ceph osd crush move dc1 root=default ceph osd crush move dc2 root=default Create a rule to map storage object replicas across your data centers. This helps to prevent data loss and enables your cluster to stay running in the event of a single data center outage. The command to create a rule uses the following syntax: ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class> . An example is shown below. ceph osd crush rule create-replicated multi-dc default datacenter hdd Note In the preceding command, if your storage cluster uses solid-state drives (SSD), specify ssd instead of hdd (hard disk drives). Configure your Ceph data and metadata pools to use the rule that you created. Initially, this might cause data to be backfilled to the storage destinations determined by the CRUSH algorithm. ceph osd pool set cephfs_data crush_rule multi-dc ceph osd pool set cephfs_metadata crush_rule multi-dc Specify the numbers of Placement Groups (PGs) and Placement Groups for Placement (PGPs) for your metadata and data pools. The PGP value should be equal to the PG value. 
ceph osd pool set cephfs_metadata pg_num 128 ceph osd pool set cephfs_metadata pgp_num 128 ceph osd pool set cephfs_data pg_num 128 ceph osd pool set cephfs_data pgp_num 128 Specify the numbers of replicas to be used by your data and metadata pools. ceph osd pool set cephfs_data min_size 1 ceph osd pool set cephfs_metadata min_size 1 ceph osd pool set cephfs_data size 2 ceph osd pool set cephfs_metadata size 2 The following figure shows the Red Hat Ceph Storage cluster created by the preceding example procedure. The storage cluster has OSDs organized into CRUSH buckets corresponding to data centers. The following figure shows a possible layout of the first data center, including your broker servers. Specifically, the data center hosts: The servers for two live-backup broker pairs The OSD nodes that you assigned to the first data center in the preceding procedure Single Metadata Server, Monitor and Manager nodes. The Monitor and Manager nodes are usually co-located on the same machine. Important You can run the OSD, MON, MGR, and MDS nodes on either the same or separate physical or virtual machines. However, to ensure fault tolerance within your Red Hat Ceph Storage cluster, it is good practice to distribute each of these types of nodes across distinct data centers. In particular, you must ensure that in the event of a single data center outage, your storage cluster still has a minimum of two available MON nodes. Therefore, if you have three MON nodes in your cluster, each of these nodes must run on separate host machines in separate data centers. The following figure shows a complete example topology. To ensure fault tolerance in your storage cluster, the MON, MGR, and MDS nodes are distributed across three separate data centers. Note Locating the host machines for certain OSD nodes in the same data center as your broker servers does not mean that you store messaging data on those specific OSD nodes. You configure the brokers to store messaging data in a specified directory in the Ceph File System. The Metadata Server nodes in your cluster then determine how to distribute the stored data across all available OSDs in your data centers and handle replication of this data across data centers. The sections that follow show how to configure brokers to store messaging data on the Ceph File System. The figure below illustrates replication of data between the two data centers that have broker servers. Additional resources For more information about: Administrating CRUSH for your Red Hat Ceph Storage cluster, see CRUSH Administration . The full set of attributes that you can set on a storage pool, see Pool Values . 15.4. Mounting the Ceph File System on your broker servers Before you can configure brokers in your messaging system to store messaging data in your Red Hat Ceph Storage cluster, you first need to mount a Ceph File System (CephFS). The procedure linked-to from this section shows you how to mount the CephFS on your broker servers. Prerequisites You have: Installed and configured a Red Hat Ceph Storage cluster. For more information, see Installing Red Hat Ceph Storage and Configuring a Red Hat Ceph Storage cluster . Installed and configured three or more Ceph Metadata Server daemons ( ceph-mds ). For more information, see Installing Metadata Servers and Configuring Metadata Server Daemons . Created the Ceph File System from a Monitor node. For more information, see Creating the Ceph File System . 
Created a Ceph File System client user with a key that your broker servers can use for authorized access. For more information, see Creating Ceph File System Client Users . Procedure For instructions on mounting the Ceph File System on your broker servers, see Mounting the Ceph File System as a kernel client . 15.5. Configuring brokers in a multi-site, fault-tolerant messaging system To configure your brokers as part of a multi-site, fault-tolerant messaging system, you need to: Add idle backup brokers to take over from live brokers in the event of a data center failure Configure all broker servers with the Ceph client role Configure each broker to use the shared store high-availability (HA) policy, specifying where in the Ceph File System the broker stores its messaging data 15.5.1. Adding backup brokers Within each of your data centers, you need to add idle backup brokers that can take over from live master-slave broker groups that shut down in the event of a data center outage. You should replicate the configuration of live master brokers in your idle backup brokers. You also need to configure your backup brokers to accept client connections in the same way as your existing brokers. In a later procedure, you see how to configure an idle backup broker to join an existing master-slave broker group. You must locate the idle backup broker in a separate data center to that of the live master-slave broker group. It is also recommended that you manually start the idle backup broker only in the event of a data center failure. The following figure shows an example topology. Additional resources To learn how to create additional broker instances, see Creating a standalone broker . For information about configuring broker network connections, see Chapter 2, Configuring acceptors and connectors in network connections . 15.5.2. Configuring brokers as Ceph clients When you have added the backup brokers that you need for a fault-tolerant system, you must configure all of the broker servers with the Ceph client role. The client role enables brokers to store data in your Red Hat Ceph Storage cluster. To learn how to configure Ceph clients, see Installing the Ceph Client Role . 15.5.3. Configuring shared store high availability The Red Hat Ceph Storage cluster effectively creates a shared store that is available to brokers in different data centers. To ensure that messages remain available to broker clients in the event of a failure, you configure each broker in your live-backup group to use: The shared store high availability (HA) policy The same journal, paging, and large message directories in the Ceph File System The following procedure shows how to configure the shared store HA policy on the master, slave, and idle backup brokers of your live-backup group. Procedure Edit the broker.xml configuration file of each broker in the live-backup group. Configure each broker to use the same paging, bindings, journal, and large message directories in the Ceph File System. 
# Master Broker - DC1 <paging-directory>mnt/cephfs/broker1/paging</paging-directory> <bindings-directory>/mnt/cephfs/data/broker1/bindings</bindings-directory> <journal-directory>/mnt/cephfs/data/broker1/journal</journal-directory> <large-messages-directory>mnt/cephfs/data/broker1/large-messages</large-messages-directory> # Slave Broker - DC1 <paging-directory>mnt/cephfs/broker1/paging</paging-directory> <bindings-directory>/mnt/cephfs/data/broker1/bindings</bindings-directory> <journal-directory>/mnt/cephfs/data/broker1/journal</journal-directory> <large-messages-directory>mnt/cephfs/data/broker1/large-messages</large-messages-directory> # Backup Broker (Idle) - DC2 <paging-directory>mnt/cephfs/broker1/paging</paging-directory> <bindings-directory>/mnt/cephfs/data/broker1/bindings</bindings-directory> <journal-directory>/mnt/cephfs/data/broker1/journal</journal-directory> <large-messages-directory>mnt/cephfs/data/broker1/large-messages</large-messages-directory> Configure the backup broker as a master within its HA policy, as shown below. This configuration setting ensures that the backup broker immediately becomes the master when you manually start it. Because the broker is an idle backup, the failover-on-shutdown parameter that you can specify for an active master broker does not apply in this case. <configuration> <core> ... <ha-policy> <shared-store> <master> </master> </shared-store> </ha-policy> ... </core> </configuration> Additional resources For more information about configuring the shared store high availability policy for live-backup broker groups, see Configuring shared store high availability . 15.6. Configuring clients in a multi-site, fault-tolerant messaging system An internal client application is one that is running on a machine located in the same data center as the broker server. The following figure shows this topology. An external client application is one running on a machine located outside the broker data center. The following figure shows this topology. The following sub-sections show examples of configuring your internal and external client applications to connect to a backup broker in another data center in the event of a data center outage. 15.6.1. Configuring internal clients If you experience a data center outage, internal client applications will shut down along with your brokers. To mitigate this situation, you must have another instance of the client application available in a separate data center. In the event of a data center outage, you manually start your backup client to connect to a backup broker that you have also manually started. To enable the backup client to connect to a backup broker, you need to configure the client connection similarly to that of the client in your primary data center. Example A basic connection configuration for an AMQ Core Protocol JMS client to a master-slave broker group is shown below. In this example, host1 and host2 are the host servers for the master and slave brokers. To configure a backup client to connect to a backup broker in the event of a data center outage, use a similar connection configuration, but specify only the host name of your backup broker server. In this example, the backup broker server is host3. Additional resources For more information about configuring broker network connections, see Chapter 2, Configuring acceptors and connectors in network connections . 15.6.2. 
Configuring external clients To enable an external broker client to continue producing or consuming messaging data in the event of a data center outage, you must configure the client to fail over to a broker in another data center. In the case of a multi-site, fault-tolerant system, you configure the client to fail over to the backup broker that you manually start in the event of an outage. Examples Shown below are examples of configuring the AMQ Core Protocol JMS and AMQ JMS clients to fail over to a backup broker in the event that the primary master-slave group is unavailable. In these examples, host1 and host2 are the host servers for the primary master and slave brokers, while host3 is the host server for the backup broker that you manually start in the event of a data center outage. To configure an AMQ Core Protocol JMS client, include the backup broker on the ordered list of brokers that the client attempts to connect to. To configure an AMQ JMS client, include the backup broker in the failover URI that you configure on the client. Additional resources For more information about configuring failover on: The AMQ Core Protocol JMS client, see Reconnect and failover . The AMQ JMS client, see Failover options . Other supported clients, consult the client-specific documentation in Product Documentation for Red Hat AMQ Clients . 15.7. Verifying storage cluster health during a data center outage When you have configured your Red Hat Ceph Storage cluster for fault tolerance, the cluster continues to run in a degraded state without losing data, even when one of your data centers fails. This procedure shows how to verify the status of your cluster while it runs in a degraded state. Procedure To verify the status of your Ceph storage cluster, use the health or status commands: To watch the ongoing events of the cluster on the command line, open a new terminal. Then, enter: When you run any of the preceding commands, you see output indicating that the storage cluster is still running, but in a degraded state. Specifically, you should see a warning that resembles the following: health: HEALTH_WARN 2 osds down Degraded data redundancy: 42/84 objects degraded (50.0%), 16 pgs unclean, 16 pgs degraded Additional resources For more information about monitoring the health of your Red Hat Ceph Storage cluster, see Monitoring . 15.8. Maintaining messaging continuity during a data center outage The following procedure shows you how to keep brokers and associated messaging data available to clients during a data center outage. Specifically, when a data center fails, you need to: Manually start any idle backup brokers that you created to take over from brokers in your failed data center. Connect internal or external clients to the new active brokers. Prerequisites You must have: Installed and configured a Red Hat Ceph Storage cluster. For more information, see Installing Red Hat Ceph Storage and Configuring a Red Hat Ceph Storage cluster . Mounted the Ceph File System. For more information, see Mounting the Ceph File System on your broker servers . Added idle backup brokers to take over from live brokers in the event of a data center failure. For more information, see Adding backup brokers . Configured your broker servers with the Ceph client role. For more information, see Configuring brokers as Ceph clients . Configured each broker to use the shared store high availability (HA) policy, specifying where in the Ceph File System each broker stores its messaging data . 
For more information, see Configuring shared store high availability . Configured your clients to connect to backup brokers in the event of a data center outage. For more information, see Configuring clients in a multi-site, fault-tolerant messaging system . Procedure For each master-slave broker pair in the failed data center, manually start the idle backup broker that you added. Reestablish client connections. If you were using an internal client in the failed data center, manually start the backup client that you created. As described in Configuring clients in a multi-site, fault-tolerant messaging system , you must configure the client to connect to the backup broker that you manually started. The following figure shows the new topology. If you have an external client, manually connect the external client to the new active broker or observe that the clients automatically fails over to the new active broker, based on its configuration. For more information, see Configuring external clients . The following figure shows the new topology. 15.9. Restarting a previously failed data center When a previously failed data center is back online, follow these steps to restore the original state of your messaging system: Restart the servers that host the nodes of your Red Hat Ceph Storage cluster Restart the brokers in your messaging system Re-establish connections from your client applications to your restored brokers The following sub-sections show to perform these steps. 15.9.1. Restarting storage cluster servers When you restart Monitor, Metadata Server, Manager, and Object Storage Device (OSD) nodes in a previously failed data center, your Red Hat Ceph Storage cluster self-heals to restore full data redundancy. During this process, Red Hat Ceph Storage automatically backfills data to the restored OSD nodes, as needed. To verify that your storage cluster is automatically self-healing and restoring full data redundancy, use the commands previously shown in Verifying storage cluster health during a data center outage . When you re-execute these commands, you see that the percentage degradation indicated by the HEALTH_WARN message starts to improve until it returns to 100%. 15.9.2. Restarting broker servers The following procedure shows how to restart your broker servers when your storage cluster is no longer operating in a degraded state. Procedure Stop any client applications connected to backup brokers that you manually started when the data center outage occurred. Stop the backup brokers that you manually started. On Linux: <broker_instance_dir> /bin/artemis stop On Windows: <broker_instance_dir> \bin\artemis-service.exe stop In your previously failed data center, restart the original master and slave brokers. On Linux: <broker_instance_dir> /bin/artemis run On Windows: <broker_instance_dir> \bin\artemis-service.exe start The original master broker automatically resumes its role as master when you restart it. 15.9.3. Reestablishing client connections When you have restarted your broker servers, reconnect your client applications to those brokers. The following subsections describe how to reconnect both internal and external client applications. 15.9.3.1. Reconnecting internal clients Internal clients are those running in the same, previously failed data center as the restored brokers. To reconnect internal clients, restart them. Each client application reconnects to the restored master broker that is specified in its connection configuration. 
For more information about configuring broker network connections, see Chapter 2, Configuring acceptors and connectors in network connections . 15.9.3.2. Reconnecting external clients External clients are those running outside the data center that previously failed. Based on your client type, and the information in Configuring external broker clients , you either configured the client to automatically fail over to a backup broker, or you manually established this connection. When you restore your previously failed data center, you reestablish a connection from your client to the restored master broker in a similar way, as described below. If you configured your external client to automatically fail over to a backup broker, the client automatically fails back to the original master broker when you shut down the backup broker and restart the original master broker. If you manually connected the external client to a backup broker when a data center outage occurred, you must manually reconnect the client to the original master broker that you restart. | [
"ceph osd crush add-bucket dc1 datacenter ceph osd crush add-bucket dc2 datacenter",
"ceph osd crush move host1 datacenter=dc1 ceph osd crush move host2 datacenter=dc1 ceph osd crush move host3 datacenter=dc2 ceph osd crush move host4 datacenter=dc2",
"ceph osd crush move dc1 root=default ceph osd crush move dc2 root=default",
"ceph osd crush rule create-replicated multi-dc default datacenter hdd",
"ceph osd pool set cephfs_data crush_rule multi-dc ceph osd pool set cephfs_metadata crush_rule multi-dc",
"ceph osd pool set cephfs_metadata pg_num 128 ceph osd pool set cephfs_metadata pgp_num 128 ceph osd pool set cephfs_data pg_num 128 ceph osd pool set cephfs_data pgp_num 128",
"ceph osd pool set cephfs_data min_size 1 ceph osd pool set cephfs_metadata min_size 1 ceph osd pool set cephfs_data size 2 ceph osd pool set cephfs_metadata size 2",
"Master Broker - DC1 <paging-directory>mnt/cephfs/broker1/paging</paging-directory> <bindings-directory>/mnt/cephfs/data/broker1/bindings</bindings-directory> <journal-directory>/mnt/cephfs/data/broker1/journal</journal-directory> <large-messages-directory>mnt/cephfs/data/broker1/large-messages</large-messages-directory> Slave Broker - DC1 <paging-directory>mnt/cephfs/broker1/paging</paging-directory> <bindings-directory>/mnt/cephfs/data/broker1/bindings</bindings-directory> <journal-directory>/mnt/cephfs/data/broker1/journal</journal-directory> <large-messages-directory>mnt/cephfs/data/broker1/large-messages</large-messages-directory> Backup Broker (Idle) - DC2 <paging-directory>mnt/cephfs/broker1/paging</paging-directory> <bindings-directory>/mnt/cephfs/data/broker1/bindings</bindings-directory> <journal-directory>/mnt/cephfs/data/broker1/journal</journal-directory> <large-messages-directory>mnt/cephfs/data/broker1/large-messages</large-messages-directory>",
"<configuration> <core> <ha-policy> <shared-store> <master> </master> </shared-store> </ha-policy> </core> </configuration>",
"<ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(\"(tcp://host1:port,tcp://host2:port)?ha=true&retryInterval=100&retryIntervalMultiplier=1.0&reconnectAttempts=-1\");",
"<ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(\"(tcp://host3:port)?ha=true&retryInterval=100&retryIntervalMultiplier=1.0&reconnectAttempts=-1\");",
"<ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(\"(tcp://host1:port,tcp://host2:port,tcp://host3:port)?ha=true&retryInterval=100&retryIntervalMultiplier=1.0&reconnectAttempts=-1\");",
"failover:(amqp://host1:port,amqp://host2:port,amqp://host3:port)?jms.clientID=myclient&failover.maxReconnectAttempts=20",
"ceph health ceph status",
"ceph -w",
"health: HEALTH_WARN 2 osds down Degraded data redundancy: 42/84 objects degraded (50.0%), 16 pgs unclean, 16 pgs degraded",
"<broker_instance_dir> /bin/artemis stop",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"<broker_instance_dir> /bin/artemis run",
"<broker_instance_dir> \\bin\\artemis-service.exe start"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/configuring_amq_broker/configuring-fault-tolerant-system-configuring |
6.3. Configuration Suggestions | 6.3. Configuration Suggestions Red Hat Enterprise Linux provides a number of tools to assist administrators in configuring the system. This section outlines the available tools and provides examples of how they can be used to solve processor related performance problems in Red Hat Enterprise Linux 7. 6.3.1. Configuring Kernel Tick Time By default, Red Hat Enterprise Linux 7 uses a tickless kernel, which does not interrupt idle CPUs in order to reduce power usage and allow newer processors to take advantage of deep sleep states. Red Hat Enterprise Linux 7 also offers a dynamic tickless option (disabled by default), which is useful for very latency-sensitive workloads, such as high performance computing or realtime computing. To enable dynamic tickless behavior in certain cores, specify those cores on the kernel command line with the nohz_full parameter. On a 16 core system, specifying nohz_full=1-15 enables dynamic tickless behavior on cores 1 through 15, moving all timekeeping to the only unspecified core (core 0). This behavior can be enabled either temporarily at boot time, or persistently via the GRUB_CMDLINE_LINUX option in the /etc/default/grub file. For persistent behavior, run the grub2-mkconfig -o /boot/grub2/grub.cfg command to save your configuration. Enabling dynamic tickless behavior does require some manual administration. When the system boots, you must manually move rcu threads to the non-latency-sensitive core, in this case core 0. Use the isolcpus parameter on the kernel command line to isolate certain cores from user-space tasks. Optionally, set CPU affinity for the kernel's write-back bdi-flush threads to the housekeeping core: Verify that the dynamic tickless configuration is working correctly by executing the following command, where stress is a program that spins on the CPU for 1 second. One possible replacement for stress is a script that runs something like while :; do d=1; done . The default kernel timer configuration shows 1000 ticks on a busy CPU: With the dynamic tickless kernel configured, you should see 1 tick instead: 6.3.2. Setting Hardware Performance Policy (x86_energy_perf_policy) The x86_energy_perf_policy tool allows administrators to define the relative importance of performance and energy efficiency. This information can then be used to influence processors that support this feature when they select options that trade off between performance and energy efficiency. By default, it operates on all processors in performance mode. It requires processor support, which is indicated by the presence of CPUID.06H.ECX.bit3 , and must be run with root privileges. x86_energy_perf_policy is provided by the kernel-tools package. For details of how to use x86_energy_perf_policy , see Section A.9, "x86_energy_perf_policy" or refer to the man page: 6.3.3. Setting Process Affinity with taskset The taskset tool is provided by the util-linux package. Taskset allows administrators to retrieve and set the processor affinity of a running process, or launch a process with a specified processor affinity. Important taskset does not guarantee local memory allocation. If you require the additional performance benefits of local memory allocation, Red Hat recommends using numactl instead of taskset . For more information about taskset , see Section A.15, "taskset" or the man page: 6.3.4. Managing NUMA Affinity with numactl Administrators can use numactl to run a process with a specified scheduling or memory placement policy. 
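For example, the following command is a minimal sketch of binding a process and its memory allocations to a single NUMA node (the application name my_app is a placeholder, not part of the original guide): numactl --cpunodebind=0 --membind=0 ./my_app # run my_app on the CPUs of node 0 and allocate its memory only from node 0 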
Numactl can also set a persistent policy for shared memory segments or files, and set the processor affinity and memory affinity of a process. In a system with NUMA topology, a processor's memory access slows as the distance between the processor and the memory bank increases. Therefore, it is important to configure applications that are sensitive to performance so that they allocate memory from the closest possible memory bank. It is best to use memory and CPUs that are in the same NUMA node. Multi-threaded applications that are sensitive to performance may benefit from being configured to execute on a specific NUMA node rather than a specific processor. Whether this is suitable depends on your system and the requirements of your application. If multiple application threads access the same cached data, then configuring those threads to execute on the same processor may be suitable. However, if multiple threads that access and cache different data execute on the same processor, each thread may evict cached data accessed by a thread. This means that each thread 'misses' the cache, and wastes execution time fetching data from memory and replacing it in the cache. You can use the perf tool, as documented in Section A.6, "perf" , to check for an excessive number of cache misses. Numactl provides a number of options to assist you in managing processor and memory affinity. See Section A.11, "numastat" or the man page for details: Note The numactl package includes the libnuma library. This library offers a simple programming interface to the NUMA policy supported by the kernel, and can be used for more fine-grained tuning than the numactl application. For more information, see the man page: 6.3.5. Automatic NUMA Affinity Management with numad numad is an automatic NUMA affinity management daemon. It monitors NUMA topology and resource usage within a system in order to dynamically improve NUMA resource allocation and management. numad also provides a pre-placement advice service that can be queried by various job management systems to provide assistance with the initial binding of CPU and memory resources for their processes. This pre-placement advice is available regardless of whether numad is running as an executable or a service. For details of how to use numad , see Section A.13, "numad" or refer to the man page: 6.3.6. Tuning Scheduling Policy The Linux scheduler implements a number of scheduling policies, which determine where and for how long a thread runs. There are two major categories of scheduling policies: normal policies and realtime policies. Normal threads are used for tasks of normal priority. Realtime policies are used for time-sensitive tasks that must complete without interruptions. Realtime threads are not subject to time slicing. This means they will run until they block, exit, voluntarily yield, or are pre-empted by a higher priority thread. The lowest priority realtime thread is scheduled before any thread with a normal policy. 6.3.6.1. Scheduling Policies 6.3.6.1.1. Static Priority Scheduling with SCHED_FIFO SCHED_FIFO (also called static priority scheduling) is a realtime policy that defines a fixed priority for each thread. This policy allows administrators to improve event response time and reduce latency, and is recommended for time sensitive tasks that do not run for an extended period of time. When SCHED_FIFO is in use, the scheduler scans the list of all SCHED_FIFO threads in priority order and schedules the highest priority thread that is ready to run. 
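For example, you can launch a program under SCHED_FIFO with a chosen static priority by using the chrt utility from the util-linux package; the following is an illustrative sketch only, and the application name my_app is a placeholder: chrt -f 10 ./my_app # -f selects the SCHED_FIFO policy; 10 is the static priority 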
The priority level of a SCHED_FIFO thread can be any integer from 1 to 99, with 99 treated as the highest priority. Red Hat recommends starting at a low number and increasing priority only when you identify latency issues. Warning Because realtime threads are not subject to time slicing, Red Hat does not recommend setting a priority of 99. This places your process at the same priority level as migration and watchdog threads; if your thread goes into a computational loop and these threads are blocked, they will not be able to run. Systems with a single processor will eventually hang in this situation. Administrators can limit SCHED_FIFO bandwidth to prevent realtime application programmers from initiating realtime tasks that monopolize the processor. /proc/sys/kernel/sched_rt_period_us This parameter defines the time period in microseconds that is considered to be one hundred percent of processor bandwidth. The default value is 1000000 μs, or 1 second. /proc/sys/kernel/sched_rt_runtime_us This parameter defines the time period in microseconds that is devoted to running realtime threads. The default value is 950000 μs, or 0.95 seconds. 6.3.6.1.2. Round Robin Priority Scheduling with SCHED_RR SCHED_RR is a round-robin variant of SCHED_FIFO . This policy is useful when multiple threads need to run at the same priority level. Like SCHED_FIFO , SCHED_RR is a realtime policy that defines a fixed priority for each thread. The scheduler scans the list of all SCHED_RR threads in priority order and schedules the highest priority thread that is ready to run. However, unlike SCHED_FIFO , threads that have the same priority are scheduled round-robin style within a certain time slice. You can set the value of this time slice in milliseconds with the sched_rr_timeslice_ms kernel parameter ( /proc/sys/kernel/sched_rr_timeslice_ms ). The lowest value is 1 millisecond. 6.3.6.1.3. Normal Scheduling with SCHED_OTHER SCHED_OTHER is the default scheduling policy in Red Hat Enterprise Linux 7. This policy uses the Completely Fair Scheduler (CFS) to allow fair processor access to all threads scheduled with this policy. This policy is most useful when there are a large number of threads or data throughput is a priority, as it allows more efficient scheduling of threads over time. When this policy is in use, the scheduler creates a dynamic priority list based partly on the niceness value of each process thread. Administrators can change the niceness value of a process, but cannot change the scheduler's dynamic priority list directly. For details about changing process niceness, see the Red Hat Enterprise Linux 7 System Administrator's Guide . 6.3.6.2. Isolating CPUs You can isolate one or more CPUs from the scheduler with the isolcpus boot parameter. This prevents the scheduler from scheduling any user-space threads on this CPU. Once a CPU is isolated, you must manually assign processes to the isolated CPU, either with the CPU affinity system calls or the numactl command. To isolate the third and sixth to eighth CPUs on your system, add the following to the kernel command line: You can also use the Tuna tool to isolate a CPU. Tuna can isolate a CPU at any time, not just at boot time. However, this method of isolation is subtly different from the isolcpus parameter, and does not currently achieve the performance gains associated with isolcpus . See Section 6.3.8, "Configuring CPU, Thread, and Interrupt Affinity with Tuna" for more details about this tool. 6.3.7. 
Setting Interrupt Affinity on AMD64 and Intel 64 Interrupt requests have an associated affinity property, smp_affinity , which defines the processors that will handle the interrupt request. To improve application performance, assign interrupt affinity and process affinity to the same processor, or processors on the same core. This allows the specified interrupt and application threads to share cache lines. Important This section covers only the AMD64 and Intel 64 architecture. Interrupt affinity configuration is significantly different on other architectures. Procedure 6.1. Balancing Interrupts Automatically If your BIOS exports its NUMA topology, the irqbalance service can automatically serve interrupt requests on the node that is local to the hardware requesting service. For details on configuring irqbalance , see Section A.1, "irqbalance" . Procedure 6.2. Balancing Interrupts Manually Check which devices correspond to the interrupt requests that you want to configure. Starting with Red Hat Enterprise Linux 7.5, the system configures the optimal interrupt affinity for certain devices and their drivers automatically. You can no longer configure their affinity manually. This applies to the following devices: Devices using the be2iscsi driver NVMe PCI devices Find the hardware specification for your platform. Check if the chipset on your system supports distributing interrupts. If it does, you can configure interrupt delivery as described in the following steps. Additionally, check which algorithm your chipset uses to balance interrupts. Some BIOSes have options to configure interrupt delivery. If it does not, your chipset will always route all interrupts to a single, static CPU. You cannot configure which CPU is used. Check which Advanced Programmable Interrupt Controller (APIC) mode is in use on your system. Only non-physical flat mode ( flat ) supports distributing interrupts to multiple CPUs. This mode is available only for systems that have up to 8 CPUs. In the command output: If your system uses a mode other than flat , you can see a line similar to Setting APIC routing to physical flat . If you can see no such message, your system uses flat mode. If your system uses x2apic mode, you can disable it by adding the nox2apic option to the kernel command line in the bootloader configuration. Calculate the smp_affinity mask. The smp_affinity value is stored as a hexadecimal bit mask representing all processors in the system. Each bit configures a different CPU. The least significant bit is CPU 0. The default value of the mask is f , meaning that an interrupt request can be handled on any processor in the system. Setting this value to 1 means that only processor 0 can handle the interrupt. Procedure 6.3. Calculating the Mask In binary, use the value 1 for CPUs that will handle the interrupts. For example, to handle interrupts by CPU 0 and CPU 7, use 0000000010000001 as the binary code: Table 6.1. Binary Bits for CPUs CPU 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 Binary 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 Convert the binary code to hexadecimal. For example, to convert the binary code using Python: On systems with more than 32 processors, you must delimit smp_affinity values for discrete 32 bit groups. For example, if you want only the first 32 processors of a 64 processor system to service an interrupt request, use 0xffffffff,00000000 . Set the smp_affinity mask. The interrupt affinity value for a particular interrupt request is stored in the associated /proc/irq/ irq_number /smp_affinity file. 
Write the calculated mask to the associated file: Additional Resources On systems that support interrupt steering, modifying the smp_affinity property of an interrupt request sets up the hardware so that the decision to service an interrupt with a particular processor is made at the hardware level with no intervention from the kernel. For more information about interrupt steering, see Chapter 9, Networking . 6.3.8. Configuring CPU, Thread, and Interrupt Affinity with Tuna Tuna is a tool for tuning running processes and can control CPU, thread, and interrupt affinity, and also provides a number of actions for each type of entity it can control. For information about Tuna , see Chapter 4, Tuna . | [
"for i in `pgrep rcu[^c]` ; do taskset -pc 0 USDi ; done",
"echo 1 > /sys/bus/workqueue/devices/writeback/cpumask",
"perf stat -C 1 -e irq_vectors:local_timer_entry taskset -c 1 stress -t 1 -c 1",
"perf stat -C 1 -e irq_vectors:local_timer_entry taskset -c 1 stress -t 1 -c 1 1000 irq_vectors:local_timer_entry",
"perf stat -C 1 -e irq_vectors:local_timer_entry taskset -c 1 stress -t 1 -c 1 1 irq_vectors:local_timer_entry",
"man x86_energy_perf_policy",
"man taskset",
"man numactl",
"man numa",
"man numad",
"isolcpus=2,5-7",
"journalctl --dmesg | grep APIC",
">>> hex(int('0000000010000001', 2)) '0x81'",
"echo mask > /proc/irq/ irq_number /smp_affinity"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-cpu-configuration_suggestions |
Appendix C. Revision History | Appendix C. Revision History 0.1-2 Fri Apr 28 2023, Lucie Varakova ( [email protected] ) Added a known issue (Authentication and Interoperability). 0.1-1 Tue Mar 02 2021, Lenka Spackova ( [email protected] ) Updated a link to Upgrading from RHEL 6 to RHEL 7 . Fixed CentOS Linux name. 0.1-0 Thu Jun 25 2020, Jaroslav Klech ( [email protected] ) Various improvements to Device Drivers chapter. 0.0-9 Thu Jun 18 2020, Lenka Spackova ( [email protected] ) Added a known issue related to Audit (Security). 0.0-8 Thu May 28 2020, Lenka Spackova ( [email protected] ) Added the qlcnic and qlge drivers to the list of deprecated drivers. 0.0-7 Tue May 05 2020, Lenka Spackova ( [email protected] ) Added an enhancement related to GNOME. 0.0-6 Tue Apr 28 2020, Lenka Spackova ( [email protected] ) Updated information about in-place upgrades in Overview. 0.0-5 Fri Apr 3 2020, Lenka Spackova ( [email protected] ) Added a known issue (Networking). Added a bug fix (Authentication and Interoperability). Added a deprecated functionality ( NSS SEED ciphers). 0.0-4 Wed Apr 1 2020, Lenka Spackova ( [email protected] ) Aero adapters are now fully supported (Hardware Enablement). Added a known issue (Servers and Services). 0.0-3 Tue Mar 31 2020, Lenka Spackova ( [email protected] ) Release of the Red Hat Enterprise Linux 7.8 Release Notes. 0.0-2 Wed Feb 12 2020, Lenka Spackova ( [email protected] ) Provided a complete kernel version to Architectures and New Features chapters. Various additions and updates to individual release note descriptions. 0.0-1 Mon Feb 03 2020, Lenka Spackova ( [email protected] ) Added Important Changes to External Kernel Parameters. Added Device Drivers. Various additions and updates to individual release note descriptions. 0.0-0 Tue Oct 29 2019, Lenka Spackova ( [email protected] ) Release of the Red Hat Enterprise Linux 7.8 Beta Release Notes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.8_release_notes/revision_history |
Chapter 7. Checking for Local Storage Operator deployments | Chapter 7. Checking for Local Storage Operator deployments Red Hat OpenShift Data Foundation clusters with Local Storage Operator are deployed using local storage devices. To find out if your existing cluster with OpenShift Data Foundation was deployed using local storage devices, use the following procedure: Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure By checking the storage class associated with your OpenShift Data Foundation cluster's persistent volume claims (PVCs), you can tell if your cluster was deployed using local storage devices. Check the storage class associated with OpenShift Data Foundation cluster's PVCs with the following command: Check the output. For clusters with Local Storage Operator, the PVCs associated with ocs-deviceset use the storage class localblock . The output looks similar to the following: Additional Resources Deploying OpenShift Data Foundation using local storage devices on VMware Deploying OpenShift Data Foundation using local storage devices on Red Hat Virtualization Deploying OpenShift Data Foundation using local storage devices on bare metal Deploying OpenShift Data Foundation using local storage devices on IBM Power | [
"oc get pvc -n openshift-storage",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-d96c747b-2ab5-47e2-b07e-1079623748d8 50Gi RWO ocs-storagecluster-ceph-rbd 114s ocs-deviceset-0-0-lzfrd Bound local-pv-7e70c77c 1769Gi RWO localblock 2m10s ocs-deviceset-1-0-7rggl Bound local-pv-b19b3d48 1769Gi RWO localblock 2m10s ocs-deviceset-2-0-znhk8 Bound local-pv-e9f22cdc 1769Gi RWO localblock 2m10s"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/troubleshooting_openshift_data_foundation/checking-for-local-storage-operator-deployments_rhodf |
C.4. Further Information | C.4. Further Information For more details on perspectives, views and other Eclipse workbench details, see formal Eclipse Documentation . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/further_information |
5.7. Adding Cluster Resources | 5.7. Adding Cluster Resources To specify a device for a cluster service, follow these steps: On the Resources property of the Cluster Configuration Tool , click the Create a Resource button. Clicking the Create a Resource button causes the Resource Configuration dialog box to be displayed. At the Resource Configuration dialog box, under Select a Resource Type , click the drop-down box. At the drop-down box, select a resource to configure. The resource options are described as follows: GFS Name - Create a name for the file system resource. Mount Point - Choose the path to which the file system resource is mounted. Device - Specify the device file associated with the file system resource. Options - Mount options. File System ID - When creating a new file system resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you click OK at the Resource Configuration dialog box. If you need to assign a file system ID explicitly, specify it in this field. Force Unmount checkbox - If checked, forces the file system to unmount. The default setting is unchecked. Force Unmount kills all processes using the mount point to free up the mount when it tries to unmount. With GFS resources, the mount point is not unmounted at service tear-down unless this box is checked. File System Name - Create a name for the file system resource. File System Type - Choose the file system for the resource using the drop-down menu. Mount Point - Choose the path to which the file system resource is mounted. Device - Specify the device file associated with the file system resource. Options - Mount options. File System ID - When creating a new file system resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you click OK at the Resource Configuration dialog box. If you need to assign a file system ID explicitly, specify it in this field. Checkboxes - Specify mount and unmount actions when a service is stopped (for example, when disabling or relocating a service): Force unmount - If checked, forces the file system to unmount. The default setting is unchecked. Force Unmount kills all processes using the mount point to free up the mount when it tries to unmount. Reboot host node if unmount fails - If checked, reboots the node if unmounting this file system fails. The default setting is unchecked. Check file system before mounting - If checked, causes fsck to be run on the file system before mounting it. The default setting is unchecked. IP Address IP Address - Type the IP address for the resource. Monitor Link checkbox - Check the box to enable or disable link status monitoring of the IP address resource NFS Mount Name - Create a symbolic name for the NFS mount. Mount Point - Choose the path to which the file system resource is mounted. Host - Specify the NFS server name. Export Path - NFS export on the server. NFS and NFS4 options - Specify NFS protocol: NFS - Specifies using NFSv3 protocol. The default setting is NFS . NFS4 - Specifies using NFSv4 protocol. Options - Mount options. For more information, refer to the nfs (5) man page. Force Unmount checkbox - If checked, forces the file system to unmount. The default setting is unchecked. Force Unmount kills all processes using the mount point to free up the mount when it tries to unmount. NFS Client Name - Enter a name for the NFS client resource. Target - Enter a target for the NFS client resource. 
Supported targets are hostnames, IP addresses (with wild-card support), and netgroups. Read-Write and Read Only options - Specify the type of access rights for this NFS client resource: Read-Write - Specifies that the NFS client has read-write access. The default setting is Read-Write . Read Only - Specifies that the NFS client has read-only access. Options - Additional client access rights. For more information, refer to the exports (5) man page, General Options NFS Export Name - Enter a name for the NFS export resource. Script Name - Enter a name for the custom user script. File (with path) - Enter the path where this custom script is located (for example, /etc/init.d/ userscript ) Samba Service Name - Enter a name for the Samba server. Workgroup - Enter the Windows workgroup name or Windows NT domain of the Samba service. Note When creating or editing a cluster service, connect a Samba-service resource directly to the service, not to a resource within a service. That is, at the Service Management dialog box, use either Create a new resource for this service or Add a Shared Resource to this service ; do not use Attach a new Private Resource to the Selection or Attach a Shared Resource to the selection . When finished, click OK . Choose File => Save to save the change to the /etc/cluster/cluster.conf configuration file. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-config-service-dev-CA |
Chapter 10. Troubleshooting | Chapter 10. Troubleshooting Pods can restart for a number of reasons, but a common cause of JBoss EAP pod restarts might include OpenShift resource constraints, especially out-of-memory issues. See the OpenShift documentation for more information on OpenShift pod eviction . 10.1. Troubleshooting Pod restarts By default, JBoss EAP for OpenShift templates are configured to automatically restart affected containers when they encounter situations like out-of-memory issues. The following steps can help you diagnose and troubleshoot out-of-memory and other pod restart issues. Get the name of the pod that has been having trouble. You can see pod names, as well as the number of times each pod has restarted with the following command. To diagnose why a pod has restarted, you can examine the JBoss EAP logs of the pod, or the OpenShift events. To see the JBoss EAP logs of the pod, use the following command. To see the OpenShift events, use the following command. If a pod has restarted because of a resource issue, you can attempt to modify your OpenShift pod configuration to increase its resource requests and limits . See the OpenShift documentation for more information on configuring pod compute resources . 10.2. Troubleshooting using the JBoss EAP management CLI The JBoss EAP management CLI, EAP_HOME /bin/jboss-cli.sh , is accessible from within a container for troubleshooting purposes. Important It is not recommended to make configuration changes in a running pod using the JBoss EAP management CLI. Any configuration changes made using the management CLI in a running container will be lost when the container restarts. To make configuration changes to JBoss EAP for OpenShift, see Configuring your JBoss EAP server and application . First open a remote shell session to the running pod. Run the following command from the remote shell session to launch the JBoss EAP management CLI: 10.3. Troubleshooting errors when updating Helm Chart from version 1.0.0 to 1.1.0 on JBoss EAP 8 There may be errors when upgrading Helm Chart to the latest version on JBoss EAP 8. If you modify the immutable field before upgrading Helm Chart, the following error message may be displayed during the upgrade: UPGRADE FAILED: cannot patch "<helm-release-name>" with kind Deployment: Deployment.apps "<helm-release-name>" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"<helm-release-name>", "app.kubernetes.io/name":"<helm-release-name>"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable To resolve this error, delete the deployment resource by running the command oc delete deployment <helm-release-name> before running the command helm upgrade <helm-release-name> . | [
"oc get pods",
"logs --previous POD_NAME",
"oc get events",
"oc rsh POD_NAME",
"/opt/server/bin/jboss-cli.sh",
"UPGRADE FAILED: cannot patch \"<helm-release-name>\" with kind Deployment: Deployment.apps \"<helm-release-name>\" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{\"app.kubernetes.io/instance\":\"<helm-release-name>\", \"app.kubernetes.io/name\":\"<helm-release-name>\"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_on_openshift_container_platform/assembly_troubleshooting_default |
Chapter 4. Modifying a compute machine set | Chapter 4. Modifying a compute machine set You can modify a compute machine set, such as adding labels, changing the instance type, or changing block storage. Note If you need to scale a compute machine set without making other changes, see Manually scaling a compute machine set . 4.1. Modifying a compute machine set by using the CLI You can modify the configuration of a compute machine set, and then propagate the changes to the machines in your cluster by using the CLI. By updating the compute machine set configuration, you can enable features or change the properties of the machines it creates. When you modify a compute machine set, your changes only apply to compute machines that are created after you save the updated MachineSet custom resource (CR). The changes do not affect existing machines. Note Changes made in the underlying cloud provider are not reflected in the Machine or MachineSet CRs. To adjust instance configuration in cluster-managed infrastructure, use the cluster-side resources. You can replace the existing machines with new ones that reflect the updated configuration by scaling the compute machine set to create twice the number of replicas and then scaling it down to the original number of replicas. If you need to scale a compute machine set without making other changes, you do not need to delete the machines. Note By default, the OpenShift Container Platform router pods are deployed on compute machines. Because the router is required to access some cluster resources, including the web console, do not scale the compute machine set to 0 unless you first relocate the router pods. The output examples in this procedure use the values for an AWS cluster. Prerequisites Your OpenShift Container Platform cluster uses the Machine API. You are logged in to the cluster as an administrator by using the OpenShift CLI ( oc ). Procedure List the compute machine sets in your cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m Edit a compute machine set by running the following command: USD oc edit machinesets.machine.openshift.io <machine_set_name> \ -n openshift-machine-api Note the value of the spec.replicas field, because you need it when scaling the machine set to apply the changes. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1 # ... 1 The examples in this procedure show a compute machine set that has a replicas value of 2 . Update the compute machine set CR with the configuration options that you want and save your changes. 
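For example, on an AWS cluster you might change the instance type that new machines use by editing the provider specification; the following fragment is an illustrative sketch only, and the instance type shown is a placeholder rather than a recommendation: spec: template: spec: providerSpec: value: instanceType: m6i.2xlarge # machines created after this change use the new instance type 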
List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command: USD oc annotate machine.machine.openshift.io/<machine_name_original_1> \ -n openshift-machine-api \ machine.openshift.io/delete-machine="true" To create replacement machines with the new configuration, scale the compute machine set to twice the number of replicas by running the following command: USD oc scale --replicas=4 \ 1 machineset.machine.openshift.io <machine_set_name> \ -n openshift-machine-api 1 The original example value of 2 is doubled to 4 . List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas. To remove the machines that were created with the old configuration, scale the compute machine set to the original number of replicas by running the following command: USD oc scale --replicas=2 \ 1 machineset.machine.openshift.io <machine_set_name> \ -n openshift-machine-api 1 The original example value of 2 . Verification To verify that a machine created by the updated machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machine.machine.openshift.io <machine_name_updated_1> \ -n openshift-machine-api To verify that the compute machines without the updated configuration are deleted, list the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output while deletion is in progress for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s Example output when deletion is complete for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s Additional resources Lifecycle hooks for the machine deletion phase Scaling a compute machine set manually Controlling pod placement using the scheduler | [
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m",
"oc edit machinesets.machine.openshift.io <machine_set_name> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1",
"oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h",
"oc annotate machine.machine.openshift.io/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=4 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api",
"oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s",
"oc scale --replicas=2 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api",
"oc describe machine.machine.openshift.io <machine_name_updated_1> -n openshift-machine-api",
"oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/machine_management/modifying-machineset |
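The replica counts that the procedure asks you to note can also be read programmatically. The sketch below is an illustration only: it assumes the fabric8 kubernetes-client library, and it addresses MachineSet resources through the generic resource API because they belong to the OpenShift-specific machine.openshift.io API group.

import java.util.Map;

import io.fabric8.kubernetes.api.model.GenericKubernetesResource;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.dsl.base.ResourceDefinitionContext;

public class ListMachineSets {
    public static void main(String[] args) {
        // Describes machine.openshift.io/v1beta1 MachineSet resources generically.
        ResourceDefinitionContext machineSets = new ResourceDefinitionContext.Builder()
                .withGroup("machine.openshift.io")
                .withVersion("v1beta1")
                .withKind("MachineSet")
                .withPlural("machinesets")
                .withNamespaced(true)
                .build();

        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            for (GenericKubernetesResource ms : client.genericKubernetesResources(machineSets)
                    .inNamespace("openshift-machine-api").list().getItems()) {
                // Fields outside apiVersion/kind/metadata are exposed as a plain map.
                @SuppressWarnings("unchecked")
                Map<String, Object> spec = (Map<String, Object>) ms.getAdditionalProperties().get("spec");
                System.out.printf("%s desired replicas: %s%n",
                        ms.getMetadata().getName(),
                        spec == null ? "?" : spec.get("replicas"));
            }
        }
    }
}

This only reads the current desired replica count; the scaling and annotation steps themselves are best left to the oc commands shown in the procedure.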
Chapter 4. OCI referrers OAuth access token | Chapter 4. OCI referrers OAuth access token In some cases, depending on the features that your Red Hat Quay deployment is configured to use, you might need to leverage an OCI referrers OAuth access token . OCI referrers OAuth access tokens are used to list OCI referrers of a manifest under a repository, and are obtained by using a curl command to make a GET request to the Red Hat Quay v2/auth endpoint. These tokens are obtained via basic HTTP authentication, wherein the user provides a username and password encoded in Base64 to authenticate directly with the v2/auth API endpoint. As such, they are based directly on the user's credentials and do not follow the same detailed authorization flow as OAuth 2, but still allow a user to authorize API requests. OCI referrers OAuth access tokens do not offer scope-based permissions and do not expire. They are solely used to list OCI referrers of a manifest under a repository. Additional resource Attaching referrers to an image tag 4.1. Creating an OCI referrers OAuth access token This OCI referrers OAuth access token is used to list OCI referrers of a manifest under a repository. Procedure Update your config.yaml file to include the FEATURE_REFERRERS_API: true field. For example: # ... FEATURE_REFERRERS_API: true # ... Enter the following command to Base64 encode your credentials: $ echo -n '<username>:<password>' | base64 Example output abcdeWFkbWluOjE5ODlraWROZXQxIQ== Enter the following command to use the base64 encoded string and modify the URL endpoint to your Red Hat Quay server: $ curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq Example output { "token": "<example_secret>" } | [
"FEATURE_REFERRERS_API: true",
"echo -n '<username>:<password>' | base64",
"abcdeWFkbWluOjE5ODlraWROZXQxIQ==",
"curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq",
"{ \"token\": \"<example_secret> }"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_api_guide/oci-referrers-oauth-access-token |
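The same token request can be issued from Java instead of curl. The sketch below uses the JDK's built-in java.net.http client; the host name, credentials, and repository scope are placeholders to replace, and unlike the curl -k example it leaves TLS certificate validation at the default.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ReferrersToken {
    public static void main(String[] args) throws Exception {
        // Basic authentication: Base64 of "<username>:<password>", as in the procedure above.
        String credentials = Base64.getEncoder()
                .encodeToString("<username>:<password>".getBytes(StandardCharsets.UTF_8));
        URI uri = URI.create("https://quay-server.example.com/v2/auth"
                + "?service=quay-server.example.com"
                + "&scope=repository:quay/listocireferrs:pull,push");

        HttpRequest request = HttpRequest.newBuilder(uri)
                .header("Authorization", "Basic " + credentials)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON body contains the "token" field used for subsequent referrers API calls.
        System.out.println(response.body());
    }
}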
Chapter 7. Troubleshooting upgrade error messages | Chapter 7. Troubleshooting upgrade error messages The following table shows some cephadm upgrade error messages. If the cephadm upgrade fails for any reason, an error message appears in the storage cluster health status. Error Message Description UPGRADE_NO_STANDBY_MGR Ceph requires both active and standby manager daemons to proceed, but there is currently no standby. UPGRADE_FAILED_PULL Ceph was unable to pull the container image for the target version. This can happen if you specify a version or container image that does not exist (e.g., 1.2.3), or if the container registry is not reachable from one or more hosts in the cluster. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/upgrade_guide/troubleshooting-upgrade-error-messages_upgrade |
14.7. NUMA Node Management | 14.7. NUMA Node Management This section contains the commands needed for NUMA node management. 14.7.1. Displaying Node Information The nodeinfo command displays basic information about the node, including the model number, number of CPUs, type of CPU, and size of the physical memory. The output corresponds to the virNodeInfo structure. Specifically, the "CPU socket(s)" field indicates the number of CPU sockets per NUMA cell. | [
"virsh nodeinfo CPU model: x86_64 CPU(s): 4 CPU frequency: 1199 MHz CPU socket(s): 1 Core(s) per socket: 2 Thread(s) per core: 2 NUMA cell(s): 1 Memory size: 3715908 KiB"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-managing_guest_virtual_machines_with_virsh-numa_node_management |
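The same node details can be read programmatically. The following sketch assumes the libvirt Java bindings (org.libvirt) are installed and that a local qemu:///system hypervisor is reachable; it is an illustration, and the NodeInfo field names are taken from the binding rather than from the section above.

import org.libvirt.Connect;
import org.libvirt.NodeInfo;

public class ShowNodeInfo {
    public static void main(String[] args) throws Exception {
        // Equivalent to "virsh -c qemu:///system nodeinfo".
        Connect conn = new Connect("qemu:///system");
        try {
            NodeInfo info = conn.nodeInfo();
            System.out.println("CPU model:         " + info.model);
            System.out.println("CPU(s):            " + info.cpus);
            System.out.println("NUMA cell(s):      " + info.nodes);
            System.out.println("CPU socket(s):     " + info.sockets);
            System.out.println("Core(s) per socket: " + info.cores);
            System.out.println("Thread(s) per core: " + info.threads);
            System.out.println("Memory size (KiB): " + info.memory);
        } finally {
            conn.close();
        }
    }
}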
30.6.4. Configuring the YABOOT Boot Loader | 30.6.4. Configuring the YABOOT Boot Loader IBM eServer System p uses YABOOT as its boot loader. YABOOT uses /etc/yaboot.conf as its configuration file. Confirm that the file contains an image section with the same version as the kernel package just installed, and likewise for the initramfs image: Notice that the default is not set to the new kernel. The kernel in the first image is booted by default. To change the default kernel to boot, either move its image stanza so that it is the first one listed, or add the default directive and set it to the label of the image stanza that contains the new kernel. Begin testing the new kernel by rebooting the computer and watching the messages to ensure that the hardware is detected properly. | [
"boot=/dev/sda1 init-message=Welcome to Red Hat Enterprise Linux! Hit <TAB> for boot options partition=2 timeout=30 install=/usr/lib/yaboot/yaboot delay=10 nonvram image=/vmlinuz-2.6.32-17.EL label=old read-only initrd=/initramfs-2.6.32-17.EL.img append=\"root=LABEL=/\" image=/vmlinuz-2.6.32-19.EL label=linux read-only initrd=/initramfs-2.6.32-19.EL.img append=\"root=LABEL=/\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-kernel-boot-loader-pseries |
8.3.4. VLAD the Scanner | 8.3.4. VLAD the Scanner VLAD is a vulnerabilities scanner developed by the RAZOR team at Bindview, Inc., which checks for the SANS Top Ten list of common security issues (SNMP issues, file sharing issues, etc.). While not as full-featured as Nessus, VLAD is worth investigating. Note VLAD is not included with Red Hat Enterprise Linux and is not supported. It has been included in this document as a reference to users who may be interested in using this popular application. More information about VLAD can be found on the RAZOR team website at the following URL: http://www.bindview.com/Support/Razor/Utilities/ | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-vuln-tools-vlad |
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.4 Documentation Data Grid 8.4 Component Details Supported Configurations for Data Grid 8.4 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/rhdg-docs_datagrid |
20.2. Installing in an LPAR | 20.2. Installing in an LPAR When installing in a logical partition (LPAR), you can boot from: an FTP server the DVD drive of the HMC or SE a DASD or an FCP-attached SCSI drive prepared with the zipl boot loader an FCP-attached SCSI DVD drive Perform these common steps first: Log in on the IBM System z Hardware Management Console (HMC) or the Support Element (SE) as a user with sufficient privileges to install a new operating system to an LPAR. The SYSPROG user is recommended. Select Images , then select the LPAR to which you wish to install. Use the arrows in the frame on the right side to navigate to the CPC Recovery menu. Double-click Operating System Messages to show the text console on which Linux boot messages will appear and potentially user input will be required. Refer to the chapter on booting Linux in Linux on System z Device Drivers, Features, and Commands on Red Hat Enterprise Linux 6 and the Hardware Management Console Operations Guide , order number [ SC28-6857 ], for details. Continue with the procedure for your installation source. 20.2.1. Using an FTP Server Double-click Load from CD-ROM, DVD, or Server . In the dialog box that follows, select FTP Source , and enter the following information: Host Computer: Hostname or IP address of the FTP server you wish to install from (for example, ftp.redhat.com) User ID: Your user name on the FTP server (or anonymous) Password: Your password (use your email address if you are logging in as anonymous) Account (optional): Leave this field empty File location (optional): Directory on the FTP server holding Red Hat Enterprise Linux for System z (for example, /rhel/s390x/) Click Continue . In the dialog that follows, keep the default selection of generic.ins and click Continue . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-s390-steps-boot-installing_in_an_lpar |
Chapter 6. Configuring persistent storage | Chapter 6. Configuring persistent storage Data Grid uses cache stores and loaders to interact with persistent storage. Durability Adding cache stores allows you to persist data to non-volatile storage so it survives restarts. Write-through caching Configuring Data Grid as a caching layer in front of persistent storage simplifies data access for applications because Data Grid handles all interactions with the external storage. Data overflow Using eviction and passivation techniques ensures that Data Grid keeps only frequently used data in-memory and writes older entries to persistent storage. 6.1. Passivation Passivation configures Data Grid to write entries to cache stores when it evicts those entries from memory. In this way, passivation prevents unnecessary and potentially expensive writes to persistent storage. Activation is the process of restoring entries to memory from the cache store when there is an attempt to access passivated entries. For this reason, when you enable passivation, you must configure cache stores that implement both CacheWriter and CacheLoader interfaces so they can write and load entries from persistent storage. When Data Grid evicts an entry from the cache, it notifies cache listeners that the entry is passivated, then stores the entry in the cache store. When Data Grid gets an access request for an evicted entry, it lazily loads the entry from the cache store into memory and then notifies cache listeners that the entry is activated, while keeping the value in the store. Note Passivation uses the first cache loader in the Data Grid configuration and ignores all others. Passivation is not supported with: Transactional stores. Passivation writes and removes entries from the store outside the scope of the actual Data Grid commit boundaries. Shared stores. Shared cache stores require entries to always exist in the store for other owners. For this reason, passivation is not supported because entries cannot be removed. If you enable passivation with transactional stores or shared stores, Data Grid throws an exception. 6.1.1. How passivation works Passivation disabled Writes to data in memory result in writes to persistent storage. If Data Grid evicts data from memory, then data in persistent storage includes entries that are evicted from memory. In this way persistent storage is a superset of the in-memory cache. This approach is recommended when you require the highest consistency, because the store can be read again after a crash. If you do not configure eviction, then data in persistent storage provides a copy of data in memory. Passivation enabled Data Grid adds data to persistent storage only when it evicts data from memory, an entry is removed, or the node shuts down. When Data Grid activates entries, it restores data in memory but keeps the data in the store. This allows writes to be just as fast as without a store, while still maintaining consistency. When an entry is created or updated, only the in-memory copy is updated, so the store is temporarily outdated. Note Passivation is not supported when a store is also configured as shared. This is because entries can become out of sync between nodes, depending on when a write is evicted versus read. To guarantee data consistency, any store that is not shared should always have purgeOnStartup enabled. This is true whether passivation is enabled or disabled, because a store could hold an outdated entry while the node is down and resurrect it at a later point.
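As a minimal illustration of the passivation behavior described above, the following embedded-cache sketch pairs eviction with a non-shared soft-index file store and enables purgeOnStartup, as the note recommends. The eviction threshold and store paths are arbitrary values chosen for the example.

ConfigurationBuilder builder = new ConfigurationBuilder();
// Eviction is what triggers passivation: entries pushed out of memory are written to the store.
builder.memory().maxCount(1_000);
builder.persistence()
       .passivation(true)
       .addSoftIndexFileStore()
          .shared(false)
          .purgeOnStartup(true)
          .dataLocation("data")
          .indexLocation("index");

The table that follows traces how a configuration of this kind behaves compared with passivation disabled.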
The following table shows data in memory and in persistent storage after a series of operations: Operation Passivation disabled Passivation enabled Insert k1. Memory: k1 Disk: k1 Memory: k1 Disk: - Insert k2. Memory: k1, k2 Disk: k1, k2 Memory: k1, k2 Disk: - Eviction thread runs and evicts k1. Memory: k2 Disk: k1, k2 Memory: k2 Disk: k1 Read k1. Memory: k1, k2 Disk: k1, k2 Memory: k1, k2 Disk: k1 Eviction thread runs and evicts k2. Memory: k1 Disk: k1, k2 Memory: k1 Disk: k1, k2 Remove k2. Memory: k1 Disk: k1 Memory: k1 Disk: k1 6.2. Write-through cache stores Write-through is a cache writing mode where writes to memory and writes to cache stores are synchronous. When a client application updates a cache entry, in most cases by invoking Cache.put() , Data Grid does not return the call until it updates the cache store. This cache writing mode results in updates to the cache store concluding within the boundaries of the client thread. The primary advantage of write-through mode is that the cache and cache store are updated simultaneously, which ensures that the cache store is always consistent with the cache. However, write-through mode can potentially decrease performance because the need to access and update cache stores directly adds latency to cache operations. Write-through configuration Data Grid uses write-through mode unless you explicitly add write-behind configuration to your caches. There is no separate element or method for configuring write-through mode. For example, the following configuration adds a file-based store to the cache that implicitly uses write-through mode: <distributed-cache> <persistence passivation="false"> <file-store> <index path="path/to/index" /> <data path="path/to/data" /> </file-store> </persistence> </distributed-cache> 6.3. Write-behind cache stores Write-behind is a cache writing mode where writes to memory are synchronous and writes to cache stores are asynchronous. When clients send write requests, Data Grid adds those operations to a modification queue. Data Grid processes operations as they join the queue so that the calling thread is not blocked and the operation completes immediately. If the number of write operations in the modification queue increases beyond the size of the queue, Data Grid adds those additional operations to the queue. However, those operations do not complete until Data Grid processes operations that are already in the queue. For example, calling Cache.putAsync returns immediately and the Stage also completes immediately if the modification queue is not full. If the modification queue is full, or if Data Grid is currently processing a batch of write operations, then Cache.putAsync returns immediately and the Stage completes later. Write-behind mode provides a performance advantage over write-through mode because cache operations do not need to wait for updates to the underlying cache store to complete. However, data in the cache store remains inconsistent with data in the cache until the modification queue is processed. For this reason, write-behind mode is suitable for cache stores with low latency, such as unshared and local file-based cache stores, where the time between the write to the cache and the write to the cache store is as small as possible. 
Write-behind configuration XML <distributed-cache> <persistence> <table-jdbc-store xmlns="urn:infinispan:config:store:sql:14.0" dialect="H2" shared="true" table-name="books"> <connection-pool connection-url="jdbc:h2:mem:infinispan" username="sa" password="changeme" driver="org.h2.Driver"/> <write-behind modification-queue-size="2048" fail-silently="true"/> </table-jdbc-store> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence" : { "table-jdbc-store": { "dialect": "H2", "shared": "true", "table-name": "books", "connection-pool": { "connection-url": "jdbc:h2:mem:infinispan", "driver": "org.h2.Driver", "username": "sa", "password": "changeme" }, "write-behind" : { "modification-queue-size" : "2048", "fail-silently" : true } } } } } YAML distributedCache: persistence: tableJdbcStore: dialect: "H2" shared: "true" tableName: "books" connectionPool: connectionUrl: "jdbc:h2:mem:infinispan" driver: "org.h2.Driver" username: "sa" password: "changeme" writeBehind: modificationQueueSize: "2048" failSilently: "true" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence() .async() .modificationQueueSize(2048) .failSilently(true); Failing silently Write-behind configuration includes a fail-silently parameter that controls what happens when either the cache store is unavailable or the modification queue is full. If fail-silently="true" then Data Grid logs WARN messages and rejects write operations. If fail-silently="false" then Data Grid throws exceptions if it detects the cache store is unavailable during a write operation. Likewise if the modification queue becomes full, Data Grid throws an exception. In some cases, data loss can occur if Data Grid restarts and write operations exist in the modification queue. For example the cache store goes offline but, during the time it takes to detect that the cache store is unavailable, write operations are added to the modification queue because it is not full. If Data Grid restarts or otherwise becomes unavailable before the cache store comes back online, then the write operations in the modification queue are lost because they were not persisted. 6.4. Segmented cache stores Cache stores can organize data into hash space segments to which keys map. Segmented stores increase read performance for bulk operations; for example, streaming over data ( Cache.size , Cache.entrySet.stream ), pre-loading the cache, and doing state transfer operations. However, segmented stores can also result in loss of performance for write operations. This performance loss applies particularly to batch write operations that can take place with transactions or write-behind stores. For this reason, you should evaluate the overhead for write operations before you enable segmented stores. The performance gain for bulk read operations might not be acceptable if there is a significant performance loss for write operations. Important The number of segments you configure for cache stores must match the number of segments you define in the Data Grid configuration with the clustering.hash.numSegments parameter. If you change the numSegments parameter in the configuration after you add a segmented cache store, Data Grid cannot read data from that cache store. 6.5. Shared cache stores Data Grid cache stores can be local to a given node or shared across all nodes in the cluster. By default, cache stores are local ( shared="false" ). 
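Before continuing with local and shared stores, here is a minimal sketch, not taken from the product documentation, of the point made in the segmented stores section above: store segmentation follows the cache's numSegments setting, so that value must stay fixed once data has been written. The segment count and store type are arbitrary choices for illustration.

ConfigurationBuilder builder = new ConfigurationBuilder();
// The store is divided per cache segment, so changing numSegments later
// prevents Data Grid from reading existing data from the store.
builder.clustering()
       .cacheMode(CacheMode.DIST_SYNC)
       .hash().numSegments(256);
builder.persistence()
       .addSoftIndexFileStore()
          // Soft-index file stores are always segmented; the call is shown here for clarity.
          .segmented(true)
          .shared(false);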
Local cache stores are unique to each node; for example, a file-based cache store that persists data to the host filesystem. Local cache stores should use "purge on startup" to avoid loading stale entries from persistent storage. Shared cache stores allow multiple nodes to use the same persistent storage; for example, a JDBC cache store that allows multiple nodes to access the same database. Shared cache stores ensure that only the primary owner writes to persistent storage, instead of backup nodes performing write operations for every modification. Important Purging deletes data, which is not typically the desired behavior for persistent storage. Local cache store <persistence> <store shared="false" purge="true"/> </persistence> Shared cache store <persistence> <store shared="true" purge="false"/> </persistence> Additional resources Data Grid Configuration Schema 6.6. Transactions with persistent cache stores Data Grid supports transactional operations with JDBC-based cache stores only. To configure caches as transactional, you set transactional=true to keep data in persistent storage synchronized with data in memory. For all other cache stores, Data Grid does not enlist cache loaders in transactional operations. This can result in data inconsistency if transactions succeed in modifying data in memory but do not completely apply changes to data in the cache store. In these cases, manual recovery is not possible with cache stores. 6.7. Global persistent location Data Grid preserves global state so that it can restore cluster topology and cached data after restart. Remote caches Data Grid Server saves cluster state to the $RHDG_HOME/server/data directory. Important You should never delete or modify the server/data directory or its content. Data Grid restores cluster state from this directory when you restart your server instances. Changing the default configuration or directly modifying the server/data directory can cause unexpected behavior and lead to data loss. Embedded caches Data Grid defaults to the user.dir system property as the global persistent location. In most cases this is the directory where your application starts. For clustered embedded caches, such as replicated or distributed, you should always enable and configure a global persistent location to restore cluster topology. You should never configure an absolute path for a file-based cache store that is outside the global persistent location. If you do, Data Grid writes the following exception to logs: 6.7.1. Configuring the global persistent location Enable and configure the location where Data Grid stores global state for clustered embedded caches. Note Data Grid Server enables global persistence and configures a default location. You should not disable global persistence or change the default configuration for remote caches. Prerequisites Add Data Grid to your project. Procedure Enable global state in one of the following ways: Add the global-state element to your Data Grid configuration. Call the globalState().enable() methods in the GlobalConfigurationBuilder API. Define whether the global persistent location is unique to each node or shared between the cluster. Location type Configuration Unique to each node persistent-location element or persistentLocation() method Shared between the cluster shared-persistent-location element or sharedPersistentLocation(String) method Set the path where Data Grid stores cluster state. For example, for file-based cache stores the path is a directory on the host filesystem.
Values can be: Absolute and contain the full location including the root. Relative to a root location. If you specify a relative value for the path, you must also specify a system property that resolves to a root location. For example, on a Linux host system you set global/state as the path. You also set the my.data property that resolves to the /opt/data root location. In this case Data Grid uses /opt/data/global/state as the global persistent location. Global persistent location configuration XML <infinispan> <cache-container> <global-state> <persistent-location path="global/state" relative-to="my.data"/> </global-state> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "global-state": { "persistent-location" : { "path" : "global/state", "relative-to" : "my.data" } } } } } YAML cacheContainer: globalState: persistentLocation: path: "global/state" relativeTo : "my.data" GlobalConfigurationBuilder new GlobalConfigurationBuilder().globalState() .enable() .persistentLocation("global/state", "my.data"); Additional resources Data Grid configuration schema org.infinispan.configuration.global.GlobalStateConfiguration 6.8. File-based cache stores File-based cache stores provide persistent storage on the local host filesystem where Data Grid is running. For clustered caches, file-based cache stores are unique to each Data Grid node. Warning Never use filesystem-based cache stores on shared file systems, such as an NFS or Samba share, because they do not provide file locking capabilities and data corruption can occur. Additionally if you attempt to use transactional caches with shared file systems, unrecoverable failures can happen when writing to files during the commit phase. Soft-Index File Stores SoftIndexFileStore is the default implementation for file-based cache stores and stores data in a set of append-only files. When append-only files: Reach their maximum size, Data Grid creates a new file and starts writing to it. Reach the compaction threshold of less than 50% usage, Data Grid overwrites the entries to a new file and then deletes the old file. Note Using SoftIndexFileStore in a clustered cache should enable purge on startup to ensure stale entries are not resurrected. B+ trees To improve performance, append-only files in a SoftIndexFileStore are indexed using a B+ Tree that can be stored both on disk and in memory. The in-memory index uses Java soft references to ensure it can be rebuilt if removed by Garbage Collection (GC) then requested again. Because SoftIndexFileStore uses Java soft references to keep indexes in memory, it helps prevent out-of-memory exceptions. GC removes indexes before they consume too much memory while still falling back to disk. SoftIndexFileStore creates a B+ tree per configured cache segment. This provides an additional "index" as it only has so many elements and provides additional parallelism for index updates. Currently we allow for a parallel amount based on one sixteenth of the number of cache segments. Each entry in the B+ tree is a node. By default, the size of each node is limited to 4096 bytes. SoftIndexFileStore throws an exception if keys are longer after serialization occurs. File limits SoftIndexFileStore will use two plus the configured openFilesLimit amount of files at a given time. The two additional file pointers are reserved for the log appender for newly updated data and another for the compactor which writes compacted entries into a new file. 
The number of open files allocated for indexing is one tenth of the configured openFilesLimit. This number has a minimum of 1 or the number of cache segments. Any number remaining from the configured limit is allocated for open data files themselves. Segmentation Soft-index file stores are always segmented. The append log(s) are not directly segmented and segmentation is handled directly by the index. Expiration The SoftIndexFileStore has full support for expired entries and their requirements. Single File Cache Stores Note Single file cache stores are now deprecated and planned for removal. Single File cache stores, SingleFileStore , persist data to a file. Data Grid also maintains an in-memory index of keys while keys and values are stored in the file. Because SingleFileStore keeps an in-memory index of keys and the location of values, it requires additional memory, depending on the key size and the number of keys. For this reason, SingleFileStore is not recommended for use cases where keys are large or there can be a large number of them. In some cases, SingleFileStore can also become fragmented. If the size of values continually increases, available space in the single file is not used but the entry is appended to the end of the file. Available space in the file is used only if an entry can fit within it. Likewise, if you remove all entries from memory, the single file store does not decrease in size or become defragmented. Segmentation Single file cache stores are segmented by default with a separate instance per segment, which results in multiple directories. Each directory is a number that represents the segment to which the data maps. 6.8.1. Configuring file-based cache stores Add file-based cache stores to Data Grid to persist data on the host filesystem. Prerequisites Enable global state and configure a global persistent location if you are configuring embedded caches. Procedure Add the persistence element to your cache configuration. Optionally specify true as the value for the passivation attribute to write to the file-based cache store only when data is evicted from memory. Include the file-store element and configure attributes as appropriate. Specify false as the value for the shared attribute. File-based cache stores should always be unique to each Data Grid instance. If you want to use the same persistent store across a cluster, configure shared storage such as a JDBC string-based cache store . Configure the index and data elements to specify the location where Data Grid creates indexes and stores data. Include the write-behind element if you want to configure the cache store with write-behind mode.
File-based cache store configuration XML <distributed-cache> <persistence passivation="true"> <file-store shared="false"> <data path="data"/> <index path="index"/> <write-behind modification-queue-size="2048" /> </file-store> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence": { "passivation": true, "file-store" : { "shared": false, "data": { "path": "data" }, "index": { "path": "index" }, "write-behind": { "modification-queue-size": "2048" } } } } } YAML distributedCache: persistence: passivation: "true" fileStore: shared: "false" data: path: "data" index: path: "index" writeBehind: modificationQueueSize: "2048" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence().passivation(true) .addSoftIndexFileStore() .shared(false) .dataLocation("data") .indexLocation("index") .modificationQueueSize(2048); 6.8.2. Configuring single file cache stores If required, you can configure Data Grid to create single file stores. Important Single file stores are deprecated. You should use soft-index file stores for better performance and data consistency in comparison with single file stores. Prerequisites Enable global state and configure a global persistent location if you are configuring embedded caches. Procedure Add the persistence element to your cache configuration. Optionally specify true as the value for the passivation attribute to write to the file-based cache store only when data is evicted from memory. Include the single-file-store element. Specify false as the value for the shared attribute. Configure any other attributes as appropriate. Include the write-behind element to configure the cache store as write behind instead of as write through. Single file cache store configuration XML <distributed-cache> <persistence passivation="true"> <single-file-store shared="false" preload="true"/> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence" : { "passivation" : true, "single-file-store" : { "shared" : false, "preload" : true } } } } YAML distributedCache: persistence: passivation: "true" singleFileStore: shared: "false" preload: "true" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence().passivation(true) .addStore(SingleFileStoreConfigurationBuilder.class) .shared(false) .preload(true); 6.9. JDBC connection factories Data Grid provides different ConnectionFactory implementations that allow you to connect to databases. You use JDBC connections with SQL cache stores and JDBC string-based caches stores. Connection pools Connection pools are suitable for standalone Data Grid deployments and are based on Agroal. 
XML <distributed-cache> <persistence> <connection-pool connection-url="jdbc:h2:mem:infinispan;DB_CLOSE_DELAY=-1" username="sa" password="changeme" driver="org.h2.Driver"/> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence": { "connection-pool": { "connection-url": "jdbc:h2:mem:infinispan_string_based", "driver": "org.h2.Driver", "username": "sa", "password": "changeme" } } } } YAML distributedCache: persistence: connectionPool: connectionUrl: "jdbc:h2:mem:infinispan_string_based;DB_CLOSE_DELAY=-1" driver: org.h2.Driver username: sa password: changeme ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence() .connectionPool() .connectionUrl("jdbc:h2:mem:infinispan_string_based;DB_CLOSE_DELAY=-1") .username("sa") .driverClass("org.h2.Driver"); Managed datasources Datasource connections are suitable for managed environments such as application servers. XML <distributed-cache> <persistence> <data-source jndi-url="java:/StringStoreWithManagedConnectionTest/DS" /> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence": { "data-source": { "jndi-url": "java:/StringStoreWithManagedConnectionTest/DS" } } } } YAML distributedCache: persistence: dataSource: jndiUrl: "java:/StringStoreWithManagedConnectionTest/DS" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence() .dataSource() .jndiUrl("java:/StringStoreWithManagedConnectionTest/DS"); Simple connections Simple connection factories create database connections on a per invocation basis and are intended for use with test or development environments only. XML <distributed-cache> <persistence> <simple-connection connection-url="jdbc:h2://localhost" username="sa" password="changeme" driver="org.h2.Driver"/> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence": { "simple-connection": { "connection-url": "jdbc:h2://localhost", "driver": "org.h2.Driver", "username": "sa", "password": "changeme" } } } } YAML distributedCache: persistence: simpleConnection: connectionUrl: "jdbc:h2://localhost" driver: org.h2.Driver username: sa password: changeme ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence() .simpleConnection() .connectionUrl("jdbc:h2://localhost") .driverClass("org.h2.Driver") .username("admin") .password("changeme"); Additional resources PooledConnectionFactoryConfigurationBuilder ManagedConnectionFactoryConfigurationBuilder SimpleConnectionFactoryConfigurationBuilder 6.9.1. Configuring managed datasources Create managed datasources as part of your Data Grid Server configuration to optimize connection pooling and performance for JDBC database connections. You can then specify the JDNI name of the managed datasources in your caches, which centralizes JDBC connection configuration for your deployment. Prerequisites Copy database drivers to the server/lib directory in your Data Grid Server installation. Tip Use the install command with the Data Grid Command Line Interface (CLI) to download the required drivers to the server/lib directory, for example: Procedure Open your Data Grid Server configuration for editing. Add a new data-source to the data-sources section. Uniquely identify the datasource with the name attribute or field. Specify a JNDI name for the datasource with the jndi-name attribute or field. Tip You use the JNDI name to specify the datasource in your JDBC cache store configuration. 
Set true as the value of the statistics attribute or field to enable statistics for the datasource through the /metrics endpoint. Provide JDBC driver details that define how to connect to the datasource in the connection-factory section. Specify the name of the database driver with the driver attribute or field. Specify the JDBC connection url with the url attribute or field. Specify credentials with the username and password attributes or fields. Provide any other configuration as appropriate. Define how Data Grid Server nodes pool and reuse connections with connection pool tuning properties in the connection-pool section. Save the changes to your configuration. Verification Use the Data Grid Command Line Interface (CLI) to test the datasource connection, as follows: Start a CLI session. List all datasources and confirm the one you created is available. Test a datasource connection. Managed datasource configuration XML <server xmlns="urn:infinispan:server:14.0"> <data-sources> <!-- Defines a unique name for the datasource and JNDI name that you reference in JDBC cache store configuration. Enables statistics for the datasource, if required. --> <data-source name="ds" jndi-name="jdbc/postgres" statistics="true"> <!-- Specifies the JDBC driver that creates connections. --> <connection-factory driver="org.postgresql.Driver" url="jdbc:postgresql://localhost:5432/postgres" username="postgres" password="changeme"> <!-- Sets optional JDBC driver-specific connection properties. --> <connection-property name="name">value</connection-property> </connection-factory> <!-- Defines connection pool tuning properties. --> <connection-pool initial-size="1" max-size="10" min-size="3" background-validation="1000" idle-removal="1" blocking-timeout="1000" leak-detection="10000"/> </data-source> </data-sources> </server> JSON { "server": { "data-sources": [{ "name": "ds", "jndi-name": "jdbc/postgres", "statistics": true, "connection-factory": { "driver": "org.postgresql.Driver", "url": "jdbc:postgresql://localhost:5432/postgres", "username": "postgres", "password": "changeme", "connection-properties": { "name": "value" } }, "connection-pool": { "initial-size": 1, "max-size": 10, "min-size": 3, "background-validation": 1000, "idle-removal": 1, "blocking-timeout": 1000, "leak-detection": 10000 } }] } } YAML server: dataSources: - name: ds jndiName: 'jdbc/postgres' statistics: true connectionFactory: driver: "org.postgresql.Driver" url: "jdbc:postgresql://localhost:5432/postgres" username: "postgres" password: "changeme" connectionProperties: name: value connectionPool: initialSize: 1 maxSize: 10 minSize: 3 backgroundValidation: 1000 idleRemoval: 1 blockingTimeout: 1000 leakDetection: 10000 6.9.1.1. Configuring caches with JNDI names When you add a managed datasource to Data Grid Server you can add the JNDI name to a JDBC-based cache store configuration. Prerequisites Configure Data Grid Server with a managed datasource. Procedure Open your cache configuration for editing. Add the data-source element or field to the JDBC-based cache store configuration. Specify the JNDI name of the managed datasource as the value of the jndi-url attribute. Configure the JDBC-based cache stores as appropriate. Save the changes to your configuration. JNDI name in cache configuration XML <distributed-cache> <persistence> <jdbc:string-keyed-jdbc-store> <!-- Specifies the JNDI name of a managed datasource on Data Grid Server. 
--> <jdbc:data-source jndi-url="jdbc/postgres"/> <jdbc:string-keyed-table drop-on-exit="true" create-on-start="true" prefix="TBL"> <jdbc:id-column name="ID" type="VARCHAR(255)"/> <jdbc:data-column name="DATA" type="BYTEA"/> <jdbc:timestamp-column name="TS" type="BIGINT"/> <jdbc:segment-column name="S" type="INT"/> </jdbc:string-keyed-table> </jdbc:string-keyed-jdbc-store> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence": { "string-keyed-jdbc-store": { "data-source": { "jndi-url": "jdbc/postgres" }, "string-keyed-table": { "prefix": "TBL", "drop-on-exit": true, "create-on-start": true, "id-column": { "name": "ID", "type": "VARCHAR(255)" }, "data-column": { "name": "DATA", "type": "BYTEA" }, "timestamp-column": { "name": "TS", "type": "BIGINT" }, "segment-column": { "name": "S", "type": "INT" } } } } } } YAML distributedCache: persistence: stringKeyedJdbcStore: dataSource: jndi-url: "jdbc/postgres" stringKeyedTable: prefix: "TBL" dropOnExit: true createOnStart: true idColumn: name: "ID" type: "VARCHAR(255)" dataColumn: name: "DATA" type: "BYTEA" timestampColumn: name: "TS" type: "BIGINT" segmentColumn: name: "S" type: "INT" 6.9.1.2. Connection pool tuning properties You can tune JDBC connection pools for managed datasources in your Data Grid Server configuration. Property Description initial-size Initial number of connections the pool should hold. max-size Maximum number of connections in the pool. min-size Minimum number of connections the pool should hold. blocking-timeout Maximum time in milliseconds to block while waiting for a connection before throwing an exception. This will never throw an exception if creating a new connection takes an inordinately long period of time. Default is 0 meaning that a call will wait indefinitely. background-validation Time in milliseconds between background validation runs. A duration of 0 means that this feature is disabled. validate-on-acquisition Connections idle for longer than this time, specified in milliseconds, are validated before being acquired (foreground validation). A duration of 0 means that this feature is disabled. idle-removal Time in minutes a connection has to be idle before it can be removed. leak-detection Time in milliseconds a connection has to be held before a leak warning. 6.9.2. Configuring JDBC connection pools with Agroal properties You can use a properties file to configure pooled connection factories for JDBC string-based cache stores. Procedure Specify JDBC connection pool configuration with org.infinispan.agroal.* properties, as in the following example: org.infinispan.agroal.metricsEnabled=false org.infinispan.agroal.minSize=10 org.infinispan.agroal.maxSize=100 org.infinispan.agroal.initialSize=20 org.infinispan.agroal.acquisitionTimeout_s=1 org.infinispan.agroal.validationTimeout_m=1 org.infinispan.agroal.leakTimeout_s=10 org.infinispan.agroal.reapTimeout_m=10 org.infinispan.agroal.metricsEnabled=false org.infinispan.agroal.autoCommit=true org.infinispan.agroal.jdbcTransactionIsolation=READ_COMMITTED org.infinispan.agroal.jdbcUrl=jdbc:h2:mem:PooledConnectionFactoryTest;DB_CLOSE_DELAY=-1 org.infinispan.agroal.driverClassName=org.h2.Driver.class org.infinispan.agroal.principal=sa org.infinispan.agroal.credential=sa Configure Data Grid to use your properties file with the properties-file attribute or the PooledConnectionFactoryConfiguration.propertyFile() method. 
XML <connection-pool properties-file="path/to/agroal.properties"/> JSON "persistence": { "connection-pool": { "properties-file": "path/to/agroal.properties" } } YAML persistence: connectionPool: propertiesFile: path/to/agroal.properties ConfigurationBuilder .connectionPool().propertyFile("path/to/agroal.properties") Additional resources Agroal 6.10. SQL cache stores SQL cache stores let you load Data Grid caches from existing database tables. Data Grid offers two types of SQL cache store: Table Data Grid loads entries from a single database table. Query Data Grid uses SQL queries to load entries from single or multiple database tables, including from sub-columns within those tables, and perform insert, update, and delete operations. Tip Visit the code tutorials to try a SQL cache store in action. See the Persistence code tutorial with remote caches . Both SQL table and query stores: Allow read and write operations to persistent storage. Can be read-only and act as a cache loader. Support keys and values that correspond to a single database column or a composite of multiple database columns. For composite keys and values, you must provide Data Grid with Protobuf schema ( .proto files) that describe the keys and values. With Data Grid Server you can add schema through the Data Grid Console or Command Line Interface (CLI) with the schema command. Warning The SQL cache store is intended for use with an existing database table. As a result, it does not store any metadata, which includes expiration, segments, and, versioning metadata. Due to the absence of version storage, SQL store does not support optimistic transactional caching and asynchronous cross-site replication. This limitation also extends to Hot Rod versioned operations. Tip Use expiration with the SQL cache store when it is configured as read only. Expiration removes stale values from memory, causing the cache to fetch the values from the database again and cache them anew. Additional resources DatabaseType Enum lists supported database dialects Data Grid SQL store configuration reference 6.10.1. Data types for keys and values Data Grid loads keys and values from columns in database tables via SQL cache stores, automatically using the appropriate data types. The following CREATE statement adds a table named "books" that has two columns, isbn and title : Database table with two columns CREATE TABLE books ( isbn NUMBER(13), title varchar(120) PRIMARY KEY(isbn) ); When you use this table with a SQL cache store, Data Grid adds an entry to the cache using the isbn column as the key and the title column as the value. Additional resources Data Grid SQL store configuration reference 6.10.1.1. Composite keys and values You can use SQL stores with database tables that contain composite primary keys or composite values. To use composite keys or values, you must provide Data Grid with Protobuf schema that describe the data types. You must also add schema configuration to your SQL store and specify the message names for keys and values. Tip Data Grid recommends generating Protobuf schema with the ProtoStream processor. You can then upload your Protobuf schema for remote caches through the Data Grid Console, CLI, or REST API. Composite values The following database table holds a composite value of the title and author columns: CREATE TABLE books ( isbn NUMBER(13), title varchar(120), author varchar(80) PRIMARY KEY(isbn) ); Data Grid adds an entry to the cache using the isbn column as the key. 
For the value, Data Grid requires a Protobuf schema that maps the title column and the author columns: package library; message books_value { optional string title = 1; optional string author = 2; } Composite keys and values The following database table holds a composite primary key and a composite value, with two columns each: CREATE TABLE books ( isbn NUMBER(13), reprint INT, title varchar(120), author varchar(80) PRIMARY KEY(isbn, reprint) ); For both the key and the value, Data Grid requires a Protobuf schema that maps the columns to keys and values: package library; message books_key { required string isbn = 1; required int32 reprint = 2; } message books_value { optional string title = 1; optional string author = 2; } Additional resources Cache encoding and marshalling: Generate Protobuf schema and register them with Data Grid Data Grid SQL store configuration reference 6.10.1.2. Embedded keys Protobuf schema can include keys within values, as in the following example: Protobuf schema with an embedded key package library; message books_key { required string isbn = 1; required int32 reprint = 2; } message books_value { required string isbn = 1; required string reprint = 2; optional string title = 3; optional string author = 4; } To use embedded keys, you must include the embedded-key="true" attribute or embeddedKey(true) method in your SQL store configuration. 6.10.1.3. SQL types to Protobuf types The following table contains default mappings of SQL data types to Protobuf data types: SQL type Protobuf type int4 int32 int8 int64 float4 float float8 double numeric double bool bool char string varchar string text , tinytext , mediumtext , longtext string bytea , tinyblob , blob , mediumblob , longblob bytes Additional resources Cache encoding and marshalling 6.10.2. Loading Data Grid caches from database tables Add a SQL table cache store to your configuration if you want Data Grid to load data from a database table. When it connects to the database, Data Grid uses metadata from the table to detect column names and data types. Data Grid also automatically determines which columns in the database are part of the primary key. Prerequisites Have JDBC connection details. You can add JDBC connection factories directly to your cache configuration. For remote caches in production environments, you should add managed datasources to Data Grid Server configuration and specify the JNDI name in the cache configuration. Generate Protobuf schema for any composite keys or composite values and register your schemas with Data Grid. Tip Data Grid recommends generating Protobuf schema with the ProtoStream processor. For remote caches, you can register your schemas by adding them through the Data Grid Console, CLI, or REST API. Procedure Add database drivers to your Data Grid deployment. Remote caches: Copy database drivers to the server/lib directory in your Data Grid Server installation. Tip Use the install command with the Data Grid Command Line Interface (CLI) to download the required drivers to the server/lib directory, for example: Embedded caches: Add the infinispan-cachestore-sql dependency to your pom file. <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-cachestore-sql</artifactId> </dependency> Open your Data Grid configuration for editing. Add a SQL table cache store. 
Declarative table-jdbc-store xmlns="urn:infinispan:config:store:sql:14.0" Programmatic persistence().addStore(TableJdbcStoreConfigurationBuilder.class) Specify the database dialect with either dialect="" or dialect() , for example dialect="H2" or dialect="postgres" . Configure the SQL cache store with the properties you require, for example: To use the same cache store across your cluster, set shared="true" or shared(true) . To create a read only cache store, set read-only="true" or .ignoreModifications(true) . Name the database table that loads the cache with table-name="<database_table_name>" or table.name("<database_table_name>") . Add the schema element or the .schemaJdbcConfigurationBuilder() method and add Protobuf schema configuration for composite keys or values. Specify the package name with the package attribute or package() method. Specify composite values with the message-name attribute or messageName() method. Specify composite keys with the key-message-name attribute or keyMessageName() method. Set a value of true for the embedded-key attribute or embeddedKey() method if your schema includes keys within values. Save the changes to your configuration. SQL table store configuration The following example loads a distributed cache from a database table named "books" using composite values defined in a Protobuf schema: XML <distributed-cache> <persistence> <table-jdbc-store xmlns="urn:infinispan:config:store:sql:14.0" dialect="H2" shared="true" table-name="books"> <schema message-name="books_value" package="library"/> </table-jdbc-store> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence": { "table-jdbc-store": { "dialect": "H2", "shared": "true", "table-name": "books", "schema": { "message-name": "books_value", "package": "library" } } } } } YAML distributedCache: persistence: tableJdbcStore: dialect: "H2" shared: "true" tableName: "books" schema: messageName: "books_value" package: "library" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence().addStore(TableJdbcStoreConfigurationBuilder.class) .dialect(DatabaseType.H2) .shared("true") .tableName("books") .schemaJdbcConfigurationBuilder() .messageName("books_value") .packageName("library"); Additional resources Cache encoding and marshalling: Generate Protobuf schema and register them with Data Grid Persistence code tutorial with remote caches JDBC connection factories DatabaseType Enum lists supported database dialects Data Grid SQL store configuration reference 6.10.3. Using SQL queries to load data and perform operations SQL query cache stores let you load caches from multiple database tables, including from sub-columns in database tables, and perform insert, update, and delete operations. Prerequisites Have JDBC connection details. You can add JDBC connection factories directly to your cache configuration. For remote caches in production environments, you should add managed datasources to Data Grid Server configuration and specify the JNDI name in the cache configuration. Generate Protobuf schema for any composite keys or composite values and register your schemas with Data Grid. Tip Data Grid recommends generating Protobuf schema with the ProtoStream processor. For remote caches, you can register your schemas by adding them through the Data Grid Console, CLI, or REST API. Procedure Add database drivers to your Data Grid deployment. Remote caches: Copy database drivers to the server/lib directory in your Data Grid Server installation. 
Tip Use the install command with the Data Grid Command Line Interface (CLI) to download the required drivers to the server/lib directory, for example: Embedded caches: Add the infinispan-cachestore-sql dependency to your pom file and make sure database drivers are on your application classpath. <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-cachestore-sql</artifactId> </dependency> Open your Data Grid configuration for editing. Add a SQL query cache store. Declarative query-jdbc-store xmlns="urn:infinispan:config:store:sql:14.0" Programmatic persistence().addStore(QueriesJdbcStoreConfigurationBuilder.class) Specify the database dialect with either dialect="" or dialect() , for example dialect="H2" or dialect="postgres" . Configure the SQL cache store with the properties you require, for example: To use the same cache store across your cluster, set shared="true" or shared(true) . To create a read only cache store, set read-only="true" or .ignoreModifications(true) . Define SQL query statements that load caches with data and modify database tables with the queries element or the queries() method. Query statement Description SELECT Loads a single entry into caches. You can use wildcards but must specify parameters for keys. You can use labelled expressions. SELECT ALL Loads multiple entries into caches. You can use the * wildcard if the number of columns returned match the key and value columns. You can use labelled expressions. SIZE Counts the number of entries in the cache. DELETE Deletes a single entry from the cache. DELETE ALL Deletes all entries from the cache. UPSERT Modifies entries in the cache. Note DELETE , DELETE ALL , and UPSERT statements do not apply to read only cache stores but are required if cache stores allow modifications. Parameters in DELETE statements must match parameters in SELECT statements exactly. Variables in UPSERT statements must have the same number of uniquely named variables that SELECT and SELECT ALL statements return. For example, if SELECT returns foo and bar this statement must take only :foo and :bar as variables. However you can apply the same named variable more than once in a statement. SQL queries can include JOIN , ON , and any other clauses that the database supports. Add the schema element or the .schemaJdbcConfigurationBuilder() method and add Protobuf schema configuration for composite keys or values. Specify the package name with the package attribute or package() method. Specify composite values with the message-name attribute or messageName() method. Specify composite keys with the key-message-name attribute or keyMessageName() method. Set a value of true for the embedded-key attribute or embeddedKey() method if your schema includes keys within values. Save the changes to your configuration. Additional resources Cache encoding and marshalling: Generate Protobuf schema and register them with Data Grid Persistence code tutorial with remote caches JDBC connection factories DatabaseType Enum lists supported database dialects Data Grid SQL store configuration reference 6.10.3.1. SQL query store configuration This section provides an example configuration for a SQL query cache store that loads a distributed cache with data from two database tables: "person" and "address". SQL statements The following examples show SQL data definition language (DDL) statements for the "person" and "address" tables. The data types described in the example are only valid for PostgreSQL database. 
SQL statement for the "person" table CREATE TABLE Person ( name VARCHAR(255) NOT NULL, picture BYTEA, sex VARCHAR(255), birthdate TIMESTAMP, accepted_tos BOOLEAN, notused VARCHAR(255), PRIMARY KEY (name) ); SQL statement for the "address" table CREATE TABLE Address ( name VARCHAR(255) NOT NULL, street VARCHAR(255), city VARCHAR(255), zip INT, PRIMARY KEY (name) ); Protobuf schemas Protobuf schema for the "person" and "address" tables are as follows: Protobuf schema for the "address" table package com.example; message Address { optional string street = 1; optional string city = 2 [default = "San Jose"]; optional int32 zip = 3 [default = 0]; } Protobuf schema for the "person" table package com.example; import "/path/to/address.proto"; enum Sex { FEMALE = 1; MALE = 2; } message Person { optional string name = 1; optional Address address = 2; optional bytes picture = 3; optional Sex sex = 4; optional fixed64 birthDate = 5 [default = 0]; optional bool accepted_tos = 6 [default = false]; } Cache configuration The following example loads a distributed cache from the "person" and "address" tables using a SQL query that includes a JOIN clause: XML <distributed-cache> <persistence> <query-jdbc-store xmlns="urn:infinispan:config:store:sql:14.0" dialect="POSTGRES" shared="true" key-columns="name"> <connection-pool driver="org.postgresql.Driver" connection-url="jdbc:postgresql://localhost:5432/postgres" username="postgres" password="changeme"/> <queries select-single="SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = :name AND t2.name = :name" select-all="SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = t2.name" delete-single="DELETE FROM Person t1 WHERE t1.name = :name; DELETE FROM Address t2 where t2.name = :name" delete-all="DELETE FROM Person; DELETE FROM Address" upsert="INSERT INTO Person (name, picture, sex, birthdate, accepted_tos) VALUES (:name, :picture, :sex, :birthdate, :accepted_tos); INSERT INTO Address(name, street, city, zip) VALUES (:name, :street, :city, :zip)" size="SELECT COUNT(*) FROM Person" /> <schema message-name="Person" package="com.example" embedded-key="true"/> </query-jdbc-store> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence": { "query-jdbc-store": { "dialect": "POSTGRES", "shared": "true", "key-columns": "name", "connection-pool": { "username": "postgres", "password": "changeme", "driver": "org.postgresql.Driver", "connection-url": "jdbc:postgresql://localhost:5432/postgres" }, "queries": { "select-single": "SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = :name AND t2.name = :name", "select-all": "SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = t2.name", "delete-single": "DELETE FROM Person t1 WHERE t1.name = :name; DELETE FROM Address t2 where t2.name = :name", "delete-all": "DELETE FROM Person; DELETE FROM Address", "upsert": "INSERT INTO Person (name, picture, sex, birthdate, accepted_tos) VALUES (:name, :picture, :sex, :birthdate, :accepted_tos); INSERT INTO Address(name, street, city, zip) VALUES (:name, :street, :city, :zip)", "size": "SELECT COUNT(*) FROM Person" }, "schema": { "message-name": "Person", "package": "com.example", "embedded-key": "true" } } } } } YAML distributedCache: 
persistence: queryJdbcStore: dialect: "POSTGRES" shared: "true" keyColumns: "name" connectionPool: username: "postgres" password: "changeme" driver: "org.postgresql.Driver" connectionUrl: "jdbc:postgresql://localhost:5432/postgres" queries: selectSingle: "SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = :name AND t2.name = :name" selectAll: "SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = t2.name" deleteSingle: "DELETE FROM Person t1 WHERE t1.name = :name; DELETE FROM Address t2 where t2.name = :name" deleteAll: "DELETE FROM Person; DELETE FROM Address" upsert: "INSERT INTO Person (name, picture, sex, birthdate, accepted_tos) VALUES (:name, :picture, :sex, :birthdate, :accepted_tos); INSERT INTO Address(name, street, city, zip) VALUES (:name, :street, :city, :zip)" size: "SELECT COUNT(*) FROM Person" schema: messageName: "Person" package: "com.example" embeddedKey: "true" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence().addStore(QueriesJdbcStoreConfigurationBuilder.class) .dialect(DatabaseType.POSTGRES) .shared("true") .keyColumns("name") .queriesJdbcConfigurationBuilder() .select("SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = :name AND t2.name = :name") .selectAll("SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = t2.name") .delete("DELETE FROM Person t1 WHERE t1.name = :name; DELETE FROM Address t2 where t2.name = :name") .deleteAll("DELETE FROM Person; DELETE FROM Address") .upsert("INSERT INTO Person (name, picture, sex, birthdate, accepted_tos) VALUES (:name, :picture, :sex, :birthdate, :accepted_tos); INSERT INTO Address(name, street, city, zip) VALUES (:name, :street, :city, :zip)") .size("SELECT COUNT(*) FROM Person") .schemaJdbcConfigurationBuilder() .messageName("Person") .packageName("com.example") .embeddedKey(true); Additional resources Data Grid SQL store configuration reference 6.10.4. SQL cache store troubleshooting Find out about common issues and errors with SQL cache stores and how to troubleshoot them. Data Grid logs the ISPN008064: No primary keys found error message in the following cases: The database table does not exist. The database table name is case sensitive and needs to be either all lower case or all upper case, depending on the database provider. The database table does not have any primary keys defined. To resolve this issue, you should: Check your SQL cache store configuration and ensure that you specify the name of an existing table. Ensure that the database table name conforms to any case sensitivity requirements. Ensure that your database tables have primary keys that uniquely identify the appropriate rows. 6.11. JDBC string-based cache stores JDBC String-Based cache stores, JdbcStringBasedStore , use JDBC drivers to load and store values in the underlying database. JDBC String-Based cache stores: Store each entry in its own row in the table to increase throughput for concurrent loads. Use a simple one-to-one mapping that maps each key to a String object using the key-to-string-mapper interface. Data Grid provides a default implementation, DefaultTwoWayKey2StringMapper , that handles primitive types.
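If your cache keys are not primitive types, you can plug in your own mapper instead of DefaultTwoWayKey2StringMapper . The following is a minimal sketch, not an example from the procedures in this chapter: it assumes a hypothetical PersonKey application class and implements the org.infinispan.persistence.keymappers.TwoWayKey2StringMapper interface so that the store can turn keys into the strings it writes to the ID column and back again.
import org.infinispan.persistence.keymappers.TwoWayKey2StringMapper;

// Hypothetical application key type, defined here only to keep the sketch self-contained.
class PersonKey {
   private final String name;
   PersonKey(String name) { this.name = name; }
   String getName() { return name; }
}

// Minimal sketch of a custom key mapper for non-primitive keys.
public class PersonKey2StringMapper implements TwoWayKey2StringMapper {

   @Override
   public boolean isSupportedType(Class<?> keyType) {
      // The store rejects keys whose type the mapper does not support.
      return keyType == PersonKey.class;
   }

   @Override
   public String getStringMapping(Object key) {
      // The returned string is what the store writes to the ID column.
      return ((PersonKey) key).getName();
   }

   @Override
   public Object getKeyMapping(String stringKey) {
      // Rebuild the original key when entries are loaded from the database.
      return new PersonKey(stringKey);
   }
}
You would then reference the mapper class from the store configuration, for example through the key-to-string-mapper attribute in the declarative configuration, so that the store uses it instead of the default implementation.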
In addition to the data table used to store cache entries, the store also creates a _META table for storing metadata. This table is used to ensure that any existing database content is compatible with the current Data Grid version and configuration. Note By default, cache stores are not shared, which means that all nodes in the cluster write to the underlying store on each update. If you want operations to write to the underlying database once only, you must configure the JDBC store as shared. Segmentation JdbcStringBasedStore uses segmentation by default and requires a column in the database table to represent the segments to which entries belong. Additional resources DatabaseType Enum lists supported database dialects 6.11.1. Configuring JDBC string-based cache stores Configure Data Grid caches with JDBC string-based cache stores that can connect to databases. Prerequisites Remote caches: Copy database drivers to the server/lib directory in your Data Grid Server installation. Embedded caches: Add the infinispan-cachestore-jdbc dependency to your pom file. <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-cachestore-jdbc</artifactId> </dependency> Procedure Create a JDBC string-based cache store configuration in one of the following ways: Declaratively, add the persistence element or field, then add string-keyed-jdbc-store with the following schema namespace: xmlns="urn:infinispan:config:store:jdbc:14.0" Programmatically, add the following methods to your ConfigurationBuilder : persistence().addStore(JdbcStringBasedStoreConfigurationBuilder.class) Specify the dialect of the database with either the dialect attribute or the dialect() method. Configure any properties for the JDBC string-based cache store as appropriate. For example, specify if the cache store is shared with multiple cache instances with either the shared attribute or the shared() method. Add a JDBC connection factory so that Data Grid can connect to the database. Add a database table that stores cache entries. Important Configuring the string-keyed-jdbc-store with inappropriate data types can lead to exceptions during loading or storing cache entries. For more information and a list of data types that are tested as part of the Data Grid release, see Tested database settings for Data Grid string-keyed-jdbc-store persistence (Login required) .
JDBC string-based cache store configuration XML <distributed-cache> <persistence> <string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:14.0" dialect="H2"> <connection-pool connection-url="jdbc:h2:mem:infinispan" username="sa" password="changeme" driver="org.h2.Driver"/> <string-keyed-table create-on-start="true" prefix="ISPN_STRING_TABLE"> <id-column name="ID_COLUMN" type="VARCHAR(255)" /> <data-column name="DATA_COLUMN" type="BINARY" /> <timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT" /> <segment-column name="SEGMENT_COLUMN" type="INT"/> </string-keyed-table> </string-keyed-jdbc-store> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence": { "string-keyed-jdbc-store": { "dialect": "H2", "string-keyed-table": { "prefix": "ISPN_STRING_TABLE", "create-on-start": true, "id-column": { "name": "ID_COLUMN", "type": "VARCHAR(255)" }, "data-column": { "name": "DATA_COLUMN", "type": "BINARY" }, "timestamp-column": { "name": "TIMESTAMP_COLUMN", "type": "BIGINT" }, "segment-column": { "name": "SEGMENT_COLUMN", "type": "INT" } }, "connection-pool": { "connection-url": "jdbc:h2:mem:infinispan", "driver": "org.h2.Driver", "username": "sa", "password": "changeme" } } } } } YAML distributedCache: persistence: stringKeyedJdbcStore: dialect: "H2" stringKeyedTable: prefix: "ISPN_STRING_TABLE" createOnStart: true idColumn: name: "ID_COLUMN" type: "VARCHAR(255)" dataColumn: name: "DATA_COLUMN" type: "BINARY" timestampColumn: name: "TIMESTAMP_COLUMN" type: "BIGINT" segmentColumn: name: "SEGMENT_COLUMN" type: "INT" connectionPool: connectionUrl: "jdbc:h2:mem:infinispan" driver: "org.h2.Driver" username: "sa" password: "changeme" ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence().addStore(JdbcStringBasedStoreConfigurationBuilder.class) .dialect(DatabaseType.H2) .table() .dropOnExit(true) .createOnStart(true) .tableNamePrefix("ISPN_STRING_TABLE") .idColumnName("ID_COLUMN").idColumnType("VARCHAR(255)") .dataColumnName("DATA_COLUMN").dataColumnType("BINARY") .timestampColumnName("TIMESTAMP_COLUMN").timestampColumnType("BIGINT") .segmentColumnName("SEGMENT_COLUMN").segmentColumnType("INT") .connectionPool() .connectionUrl("jdbc:h2:mem:infinispan") .username("sa") .password("changeme") .driverClass("org.h2.Driver"); Additional resources JDBC connection factories 6.12. RocksDB cache stores RocksDB provides key-value filesystem-based storage with high performance and reliability for highly concurrent environments. RocksDB cache stores, RocksDBStore , use two databases. One database provides a primary cache store for data in memory; the other database holds entries that Data Grid expires from memory. Table 6.1. Configuration parameters Parameter Description location Specifies the path to the RocksDB database that provides the primary cache store. If you do not set the location, it is automatically created. Note that the path must be relative to the global persistent location. expiredLocation Specifies the path to the RocksDB database that provides the cache store for expired data. If you do not set the location, it is automatically created. Note that the path must be relative to the global persistent location. expiryQueueSize Sets the size of the in-memory queue for expiring entries. When the queue reaches the size, Data Grid flushes the expired into the RocksDB cache store. clearThreshold Sets the maximum number of entries before deleting and re-initializing ( re-init ) the RocksDB database. 
For smaller size cache stores, iterating through all entries and removing each one individually can provide a faster method. Tuning parameters You can also specify the following RocksDB tuning parameters: compressionType blockSize cacheSize Configuration properties Optionally set properties in the configuration as follows: Prefix properties with database to adjust and tune RocksDB databases. Prefix properties with data to configure the column families in which RocksDB stores your data. Segmentation RocksDBStore supports segmentation and creates a separate column family per segment. Segmented RocksDB cache stores improve lookup performance and iteration but slightly lower performance of write operations. Note You should not configure more than a few hundred segments. RocksDB is not designed to have an unlimited number of column families. Too many segments also significantly increases cache store start time. RocksDB cache store configuration XML <local-cache> <persistence> <rocksdb-store xmlns="urn:infinispan:config:store:rocksdb:14.0" path="rocksdb/data"> <expiration path="rocksdb/expired"/> </rocksdb-store> </persistence> </local-cache> JSON { "local-cache": { "persistence": { "rocksdb-store": { "path": "rocksdb/data", "expiration": { "path": "rocksdb/expired" } } } } } YAML localCache: persistence: rocksdbStore: path: "rocksdb/data" expiration: path: "rocksdb/expired" ConfigurationBuilder Configuration cacheConfig = new ConfigurationBuilder().persistence() .addStore(RocksDBStoreConfigurationBuilder.class) .build(); EmbeddedCacheManager cacheManager = new DefaultCacheManager(cacheConfig); Cache<String, User> usersCache = cacheManager.getCache("usersCache"); usersCache.put("raytsang", new User(...)); ConfigurationBuilder with properties Properties props = new Properties(); props.put("database.max_background_compactions", "2"); props.put("data.write_buffer_size", "512MB"); Configuration cacheConfig = new ConfigurationBuilder().persistence() .addStore(RocksDBStoreConfigurationBuilder.class) .location("rocksdb/data") .expiredLocation("rocksdb/expired") .properties(props) .build(); Reference RocksDB cache store configuration schema RocksDBStore RocksDBStoreConfiguration rocksdb.org RocksDB Tuning Guide 6.13. Remote cache stores Remote cache stores, RemoteStore , use the Hot Rod protocol to store data on Data Grid clusters. Note If you configure remote cache stores as shared you cannot preload data. In other words if shared="true" in your configuration then you must set preload="false" . Segmentation RemoteStore supports segmentation and can publish keys and entries by segment, which makes bulk operations more efficient. However, segmentation is available only with Data Grid Hot Rod protocol version 2.3 or later. Warning When you enable segmentation for RemoteStore , it uses the number of segments that you define in your Data Grid server configuration. If the source cache is segmented and uses a different number of segments than RemoteStore , then incorrect values are returned for bulk operations. In this case, you should disable segmentation for RemoteStore . 
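If you need to turn segmentation off for a remote cache store, you can do so through the common segmented store attribute. The snippet below is a minimal sketch rather than part of the official example that follows; it assumes the segmented() method that store configuration builders expose for this attribute (declaratively, the equivalent is segmented="false" on the store element).
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.persistence.remote.configuration.RemoteStoreConfigurationBuilder;

// Sketch: remote cache store with segmentation disabled so that bulk operations
// return correct values when the segment counts of source and target differ.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence()
       .addStore(RemoteStoreConfigurationBuilder.class)
       .segmented(false)   // assumption: common store attribute exposed by the builder
       .remoteCacheName("mycache")
       .addServer()
          .host("one").port(12111);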
Remote cache store configuration XML <distributed-cache> <persistence> <remote-store xmlns="urn:infinispan:config:store:remote:14.0" cache="mycache" raw-values="true"> <remote-server host="one" port="12111" /> <remote-server host="two" /> <connection-pool max-active="10" exhausted-action="CREATE_NEW" /> </remote-store> </persistence> </distributed-cache> JSON { "distributed-cache": { "remote-store": { "cache": "mycache", "raw-values": "true", "remote-server": [ { "host": "one", "port": "12111" }, { "host": "two" } ], "connection-pool": { "max-active": "10", "exhausted-action": "CREATE_NEW" } } } } YAML distributedCache: remoteStore: cache: "mycache" rawValues: "true" remoteServer: - host: "one" port: "12111" - host: "two" connectionPool: maxActive: "10" exhaustedAction: "CREATE_NEW" ConfigurationBuilder ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence().addStore(RemoteStoreConfigurationBuilder.class) .ignoreModifications(false) .purgeOnStartup(false) .remoteCacheName("mycache") .rawValues(true) .addServer() .host("one").port(12111) .addServer() .host("two") .connectionPool() .maxActive(10) .exhaustedAction(ExhaustedAction.CREATE_NEW) .async().enable(); Reference Remote cache store configuration schema RemoteStore RemoteStoreConfigurationBuilder 6.14. Cluster cache loaders ClusterCacheLoader retrieves data from other Data Grid cluster members but does not persist data. In other words, ClusterCacheLoader is not a cache store. Warning ClusterLoader is deprecated and planned for removal in a future version. ClusterCacheLoader provides a non-blocking partial alternative to state transfer. ClusterCacheLoader fetches keys from other nodes on demand if those keys are not available on the local node, which is similar to lazily loading cache content. The following points also apply to ClusterCacheLoader : Preloading does not take effect ( preload=true ). Segmentation is not supported. Cluster cache loader configuration XML <distributed-cache> <persistence> <cluster-loader preload="true" remote-timeout="500"/> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence" : { "cluster-loader" : { "preload" : true, "remote-timeout" : "500" } } } } YAML distributedCache: persistence: clusterLoader: preload: "true" remoteTimeout: "500" ConfigurationBuilder ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addClusterLoader() .remoteCallTimeout(500); Additional resources Data Grid configuration schema ClusterLoader ClusterLoaderConfiguration 6.15. Creating custom cache store implementations You can create custom cache stores through the Data Grid persistent SPI. 6.15.1. Data Grid Persistence SPI The Data Grid Service Provider Interface (SPI) enables read and write operations to external storage through the NonBlockingStore interface and has the following features: Portability across JCache-compliant vendors Data Grid maintains compatibility between the NonBlockingStore interface and the JSR-107 JCache specification by using an adapter that handles blocking code. Simplified transaction integration Data Grid automatically handles locking so your implementations do not need to coordinate concurrent access to persistent stores. Depending on the locking mode you use, concurrent writes to the same key generally do not occur. However, you should expect operations on the persistent storage to originate from multiple threads and create implementations to tolerate this behavior. 
Parallel iteration Data Grid lets you iterate over entries in persistent stores with multiple threads in parallel. Reduced serialization resulting in less CPU usage Data Grid exposes stored entries in a serialized format that can be transmitted remotely. For this reason, Data Grid does not need to deserialize entries that it retrieves from persistent storage and then serialize again when writing to the wire. Additional resources Persistence SPI NonBlockingStore JSR-107 6.15.2. Creating cache stores Create custom cache stores with implementations of the NonBlockingStore API. Procedure Implement the appropriate Data Grid persistent SPIs. Annotate your store class with the @ConfiguredBy annotation if it has a custom configuration. Create a custom cache store configuration and builder if desired. Extend AbstractStoreConfiguration and AbstractStoreConfigurationBuilder . Optionally add the following annotations to your store Configuration class to ensure that your custom configuration builder parses your cache store configuration from XML: @ConfigurationFor @BuiltBy If you do not add these annotations, then CustomStoreConfigurationBuilder parses the common store attributes defined in AbstractStoreConfiguration and any additional elements are ignored. Note If a configuration does not declare the @ConfigurationFor annotation, a warning message is logged when Data Grid initializes the cache. 6.15.3. Examples of custom cache store configuration The following examples show how to configure Data Grid with custom cache store implementations: XML <distributed-cache> <persistence> <store class="org.infinispan.persistence.example.MyInMemoryStore" /> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence" : { "store" : { "class" : "org.infinispan.persistence.example.MyInMemoryStore" } } } } YAML distributedCache: persistence: store: class: "org.infinispan.persistence.example.MyInMemoryStore" ConfigurationBuilder Configuration config = new ConfigurationBuilder() .persistence() .addStore(CustomStoreConfigurationBuilder.class) .build(); 6.15.4. Deploying custom cache stores To use your cache store implementation with Data Grid Server, you must provide it with a JAR file. Prerequisites Stop Data Grid Server if it is running. Data Grid loads JAR files at startup only. Procedure Package your custom cache store implementation in a JAR file. Add your JAR file to the server/lib directory of your Data Grid Server installation. 6.16. Migrating data between cache stores Data Grid provides a utility to migrate data from one cache store to another. 6.16.1. Cache store migrator Data Grid provides the StoreMigrator.java utility that recreates data for the latest Data Grid cache store implementations. StoreMigrator takes a cache store from a version of Data Grid as the source and uses a cache store implementation as the target. When you run StoreMigrator , it creates the target cache with the cache store type that you define using the EmbeddedCacheManager interface. StoreMigrator loads entries from the source store into memory and then puts them into the target cache. StoreMigrator also lets you migrate data from one type of cache store to another. For example, you can migrate from a JDBC string-based cache store to a RocksDB cache store. Important StoreMigrator cannot migrate data from segmented cache stores to: Non-segmented cache stores. Segmented cache stores that have a different number of segments. 6.16.2.
Getting the cache store migrator StoreMigrator is available as part of the Data Grid tools library, infinispan-tools , and is included in the Maven repository. Procedure Configure your pom.xml for StoreMigrator as follows: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>org.infinispan.example</groupId> <artifactId>jdbc-migrator-example</artifactId> <version>1.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-tools</artifactId> </dependency> <!-- Additional dependencies --> </dependencies> <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.2.1</version> <executions> <execution> <goals> <goal>java</goal> </goals> </execution> </executions> <configuration> <mainClass>org.infinispan.tools.store.migrator.StoreMigrator</mainClass> <arguments> <argument>path/to/migrator.properties</argument> </arguments> </configuration> </plugin> </plugins> </build> </project> 6.16.3. Configuring the cache store migrator Use the migrator.properties file to configure properties for source and target cache stores. Procedure Create a migrator.properties file. Configure properties for source and target cache store using the migrator.properties file. Add the source. prefix to all configuration properties for the source cache store. Example source cache store Important For migrating data from segmented cache stores, you must also configure the number of segments using the source.segment_count property. The number of segments must match clustering.hash.numSegments in your Data Grid configuration. If the number of segments for a cache store does not match the number of segments for the corresponding cache, Data Grid cannot read data from the cache store. Add the target. prefix to all configuration properties for the target cache store. Example target cache store 6.16.3.1. Configuration properties for the cache store migrator Configure source and target cache stores in a StoreMigrator properties. Table 6.2. Cache Store Type Property Property Description Required/Optional type Specifies the type of cache store for a source or target cache store. .type=JDBC_STRING .type=JDBC_BINARY .type=JDBC_MIXED .type=LEVELDB .type=ROCKSDB .type=SINGLE_FILE_STORE .type=SOFT_INDEX_FILE_STORE .type=JDBC_MIXED Required Table 6.3. Common Properties Property Description Example Value Required/Optional cache_name The name of the cache that you want to back up. .cache_name=myCache Required segment_count The number of segments for target cache stores that can use segmentation. The number of segments must match clustering.hash.numSegments in the Data Grid configuration. If the number of segments for a cache store does not match the number of segments for the corresponding cache, Data Grid cannot read data from the cache store. .segment_count=256 Optional Table 6.4. JDBC Properties Property Description Required/Optional dialect Specifies the dialect of the underlying database. Required version Specifies the marshaller version for source cache stores. Set one of the following values: * 8 for Data Grid 7.2.x * 9 for Data Grid 7.3.x * 10 for Data Grid 8.0.x * 11 for Data Grid 8.1.x * 12 for Data Grid 8.2.x * 13 for Data Grid 8.3.x Required for source stores only. 
marshaller.class Specifies a custom marshaller class. Required if using custom marshallers. marshaller.externalizers Specifies a comma-separated list of custom AdvancedExternalizer implementations to load in this format: [id]:<Externalizer class> Optional connection_pool.connection_url Specifies the JDBC connection URL. Required connection_pool.driver_class Specifies the class of the JDBC driver. Required connection_pool.username Specifies a database username. Required connection_pool.password Specifies a password for the database username. Required db.disable_upsert Disables database upsert. Optional db.disable_indexing Specifies if table indexes are created. Optional table.string.table_name_prefix Specifies additional prefixes for the table name. Optional table.string.<id|data|timestamp>.name Specifies the column name. Required table.string.<id|data|timestamp>.type Specifies the column type. Required key_to_string_mapper Specifies the TwoWayKey2StringMapper class. Optional Note To migrate from Binary cache stores in older Data Grid versions, change table.string.* to table.binary.\* in the following properties: source.table.binary.table_name_prefix source.table.binary.<id\|data\|timestamp>.name source.table.binary.<id\|data\|timestamp>.type Table 6.5. RocksDB Properties Property Description Required/Optional location Sets the database directory. Required compression Specifies the compression type to use. Optional Table 6.6. SingleFileStore Properties Property Description Required/Optional location Sets the directory that contains the cache store .dat file. Required Table 6.7. SoftIndexFileStore Properties Property Description Value Required/Optional location Sets the database directory. Required index_location Sets the database index directory. 6.16.4. Migrating Data Grid cache stores You can use the StoreMigrator to migrate data between cache stores with different Data Grid versions or to migrate data from one type of cache store to another. Prerequisites Have a infinispan-tools.jar . Have the source and target cache store configured in the migrator.properties file. Procedure If you built the infinispan-tools.jar from the source code, do the following: Add infinispan-tools.jar to your classpath. Add dependencies for your source and target databases, such as JDBC drivers to your classpath. Specify migrator.properties file as an argument for StoreMigrator . If you pulled infinispan-tools.jar from the Maven repository, run the following command: mvn exec:java | [
"<distributed-cache> <persistence passivation=\"false\"> <file-store> <index path=\"path/to/index\" /> <data path=\"path/to/data\" /> </file-store> </persistence> </distributed-cache>",
"<distributed-cache> <persistence> <table-jdbc-store xmlns=\"urn:infinispan:config:store:sql:14.0\" dialect=\"H2\" shared=\"true\" table-name=\"books\"> <connection-pool connection-url=\"jdbc:h2:mem:infinispan\" username=\"sa\" password=\"changeme\" driver=\"org.h2.Driver\"/> <write-behind modification-queue-size=\"2048\" fail-silently=\"true\"/> </table-jdbc-store> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\" : { \"table-jdbc-store\": { \"dialect\": \"H2\", \"shared\": \"true\", \"table-name\": \"books\", \"connection-pool\": { \"connection-url\": \"jdbc:h2:mem:infinispan\", \"driver\": \"org.h2.Driver\", \"username\": \"sa\", \"password\": \"changeme\" }, \"write-behind\" : { \"modification-queue-size\" : \"2048\", \"fail-silently\" : true } } } } }",
"distributedCache: persistence: tableJdbcStore: dialect: \"H2\" shared: \"true\" tableName: \"books\" connectionPool: connectionUrl: \"jdbc:h2:mem:infinispan\" driver: \"org.h2.Driver\" username: \"sa\" password: \"changeme\" writeBehind: modificationQueueSize: \"2048\" failSilently: \"true\"",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence() .async() .modificationQueueSize(2048) .failSilently(true);",
"<persistence> <store shared=\"false\" purge=\"true\"/> </persistence>",
"<persistence> <store shared=\"true\" purge=\"false\"/> </persistence>",
"ISPN000558: \"The store location 'foo' is not a child of the global persistent location 'bar'\"",
"<infinispan> <cache-container> <global-state> <persistent-location path=\"global/state\" relative-to=\"my.data\"/> </global-state> </cache-container> </infinispan>",
"{ \"infinispan\" : { \"cache-container\" : { \"global-state\": { \"persistent-location\" : { \"path\" : \"global/state\", \"relative-to\" : \"my.data\" } } } } }",
"cacheContainer: globalState: persistentLocation: path: \"global/state\" relativeTo : \"my.data\"",
"new GlobalConfigurationBuilder().globalState() .enable() .persistentLocation(\"global/state\", \"my.data\");",
"<distributed-cache> <persistence passivation=\"true\"> <file-store shared=\"false\"> <data path=\"data\"/> <index path=\"index\"/> <write-behind modification-queue-size=\"2048\" /> </file-store> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\": { \"passivation\": true, \"file-store\" : { \"shared\": false, \"data\": { \"path\": \"data\" }, \"index\": { \"path\": \"index\" }, \"write-behind\": { \"modification-queue-size\": \"2048\" } } } } }",
"distributedCache: persistence: passivation: \"true\" fileStore: shared: \"false\" data: path: \"data\" index: path: \"index\" writeBehind: modificationQueueSize: \"2048\"",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence().passivation(true) .addSoftIndexFileStore() .shared(false) .dataLocation(\"data\") .indexLocation(\"index\") .modificationQueueSize(2048);",
"<distributed-cache> <persistence passivation=\"true\"> <single-file-store shared=\"false\" preload=\"true\"/> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\" : { \"passivation\" : true, \"single-file-store\" : { \"shared\" : false, \"preload\" : true } } } }",
"distributedCache: persistence: passivation: \"true\" singleFileStore: shared: \"false\" preload: \"true\"",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence().passivation(true) .addStore(SingleFileStoreConfigurationBuilder.class) .shared(false) .preload(true);",
"<distributed-cache> <persistence> <connection-pool connection-url=\"jdbc:h2:mem:infinispan;DB_CLOSE_DELAY=-1\" username=\"sa\" password=\"changeme\" driver=\"org.h2.Driver\"/> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\": { \"connection-pool\": { \"connection-url\": \"jdbc:h2:mem:infinispan_string_based\", \"driver\": \"org.h2.Driver\", \"username\": \"sa\", \"password\": \"changeme\" } } } }",
"distributedCache: persistence: connectionPool: connectionUrl: \"jdbc:h2:mem:infinispan_string_based;DB_CLOSE_DELAY=-1\" driver: org.h2.Driver username: sa password: changeme",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence() .connectionPool() .connectionUrl(\"jdbc:h2:mem:infinispan_string_based;DB_CLOSE_DELAY=-1\") .username(\"sa\") .driverClass(\"org.h2.Driver\");",
"<distributed-cache> <persistence> <data-source jndi-url=\"java:/StringStoreWithManagedConnectionTest/DS\" /> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\": { \"data-source\": { \"jndi-url\": \"java:/StringStoreWithManagedConnectionTest/DS\" } } } }",
"distributedCache: persistence: dataSource: jndiUrl: \"java:/StringStoreWithManagedConnectionTest/DS\"",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence() .dataSource() .jndiUrl(\"java:/StringStoreWithManagedConnectionTest/DS\");",
"<distributed-cache> <persistence> <simple-connection connection-url=\"jdbc:h2://localhost\" username=\"sa\" password=\"changeme\" driver=\"org.h2.Driver\"/> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\": { \"simple-connection\": { \"connection-url\": \"jdbc:h2://localhost\", \"driver\": \"org.h2.Driver\", \"username\": \"sa\", \"password\": \"changeme\" } } } }",
"distributedCache: persistence: simpleConnection: connectionUrl: \"jdbc:h2://localhost\" driver: org.h2.Driver username: sa password: changeme",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence() .simpleConnection() .connectionUrl(\"jdbc:h2://localhost\") .driverClass(\"org.h2.Driver\") .username(\"admin\") .password(\"changeme\");",
"install org.postgresql:postgresql:42.4.3",
"bin/cli.sh",
"server datasource ls",
"server datasource test my-datasource",
"<server xmlns=\"urn:infinispan:server:14.0\"> <data-sources> <!-- Defines a unique name for the datasource and JNDI name that you reference in JDBC cache store configuration. Enables statistics for the datasource, if required. --> <data-source name=\"ds\" jndi-name=\"jdbc/postgres\" statistics=\"true\"> <!-- Specifies the JDBC driver that creates connections. --> <connection-factory driver=\"org.postgresql.Driver\" url=\"jdbc:postgresql://localhost:5432/postgres\" username=\"postgres\" password=\"changeme\"> <!-- Sets optional JDBC driver-specific connection properties. --> <connection-property name=\"name\">value</connection-property> </connection-factory> <!-- Defines connection pool tuning properties. --> <connection-pool initial-size=\"1\" max-size=\"10\" min-size=\"3\" background-validation=\"1000\" idle-removal=\"1\" blocking-timeout=\"1000\" leak-detection=\"10000\"/> </data-source> </data-sources> </server>",
"{ \"server\": { \"data-sources\": [{ \"name\": \"ds\", \"jndi-name\": \"jdbc/postgres\", \"statistics\": true, \"connection-factory\": { \"driver\": \"org.postgresql.Driver\", \"url\": \"jdbc:postgresql://localhost:5432/postgres\", \"username\": \"postgres\", \"password\": \"changeme\", \"connection-properties\": { \"name\": \"value\" } }, \"connection-pool\": { \"initial-size\": 1, \"max-size\": 10, \"min-size\": 3, \"background-validation\": 1000, \"idle-removal\": 1, \"blocking-timeout\": 1000, \"leak-detection\": 10000 } }] } }",
"server: dataSources: - name: ds jndiName: 'jdbc/postgres' statistics: true connectionFactory: driver: \"org.postgresql.Driver\" url: \"jdbc:postgresql://localhost:5432/postgres\" username: \"postgres\" password: \"changeme\" connectionProperties: name: value connectionPool: initialSize: 1 maxSize: 10 minSize: 3 backgroundValidation: 1000 idleRemoval: 1 blockingTimeout: 1000 leakDetection: 10000",
"<distributed-cache> <persistence> <jdbc:string-keyed-jdbc-store> <!-- Specifies the JNDI name of a managed datasource on Data Grid Server. --> <jdbc:data-source jndi-url=\"jdbc/postgres\"/> <jdbc:string-keyed-table drop-on-exit=\"true\" create-on-start=\"true\" prefix=\"TBL\"> <jdbc:id-column name=\"ID\" type=\"VARCHAR(255)\"/> <jdbc:data-column name=\"DATA\" type=\"BYTEA\"/> <jdbc:timestamp-column name=\"TS\" type=\"BIGINT\"/> <jdbc:segment-column name=\"S\" type=\"INT\"/> </jdbc:string-keyed-table> </jdbc:string-keyed-jdbc-store> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\": { \"string-keyed-jdbc-store\": { \"data-source\": { \"jndi-url\": \"jdbc/postgres\" }, \"string-keyed-table\": { \"prefix\": \"TBL\", \"drop-on-exit\": true, \"create-on-start\": true, \"id-column\": { \"name\": \"ID\", \"type\": \"VARCHAR(255)\" }, \"data-column\": { \"name\": \"DATA\", \"type\": \"BYTEA\" }, \"timestamp-column\": { \"name\": \"TS\", \"type\": \"BIGINT\" }, \"segment-column\": { \"name\": \"S\", \"type\": \"INT\" } } } } } }",
"distributedCache: persistence: stringKeyedJdbcStore: dataSource: jndi-url: \"jdbc/postgres\" stringKeyedTable: prefix: \"TBL\" dropOnExit: true createOnStart: true idColumn: name: \"ID\" type: \"VARCHAR(255)\" dataColumn: name: \"DATA\" type: \"BYTEA\" timestampColumn: name: \"TS\" type: \"BIGINT\" segmentColumn: name: \"S\" type: \"INT\"",
"org.infinispan.agroal.metricsEnabled=false org.infinispan.agroal.minSize=10 org.infinispan.agroal.maxSize=100 org.infinispan.agroal.initialSize=20 org.infinispan.agroal.acquisitionTimeout_s=1 org.infinispan.agroal.validationTimeout_m=1 org.infinispan.agroal.leakTimeout_s=10 org.infinispan.agroal.reapTimeout_m=10 org.infinispan.agroal.metricsEnabled=false org.infinispan.agroal.autoCommit=true org.infinispan.agroal.jdbcTransactionIsolation=READ_COMMITTED org.infinispan.agroal.jdbcUrl=jdbc:h2:mem:PooledConnectionFactoryTest;DB_CLOSE_DELAY=-1 org.infinispan.agroal.driverClassName=org.h2.Driver.class org.infinispan.agroal.principal=sa org.infinispan.agroal.credential=sa",
"<connection-pool properties-file=\"path/to/agroal.properties\"/>",
"\"persistence\": { \"connection-pool\": { \"properties-file\": \"path/to/agroal.properties\" } }",
"persistence: connectionPool: propertiesFile: path/to/agroal.properties",
".connectionPool().propertyFile(\"path/to/agroal.properties\")",
"CREATE TABLE books ( isbn NUMBER(13), title varchar(120) PRIMARY KEY(isbn) );",
"CREATE TABLE books ( isbn NUMBER(13), title varchar(120), author varchar(80) PRIMARY KEY(isbn) );",
"package library; message books_value { optional string title = 1; optional string author = 2; }",
"CREATE TABLE books ( isbn NUMBER(13), reprint INT, title varchar(120), author varchar(80) PRIMARY KEY(isbn, reprint) );",
"package library; message books_key { required string isbn = 1; required int32 reprint = 2; } message books_value { optional string title = 1; optional string author = 2; }",
"package library; message books_key { required string isbn = 1; required int32 reprint = 2; } message books_value { required string isbn = 1; required string reprint = 2; optional string title = 3; optional string author = 4; }",
"install org.postgresql:postgresql:42.4.3",
"<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-cachestore-sql</artifactId> </dependency>",
"table-jdbc-store xmlns=\"urn:infinispan:config:store:sql:14.0\"",
"persistence().addStore(TableJdbcStoreConfigurationBuilder.class)",
"<distributed-cache> <persistence> <table-jdbc-store xmlns=\"urn:infinispan:config:store:sql:14.0\" dialect=\"H2\" shared=\"true\" table-name=\"books\"> <schema message-name=\"books_value\" package=\"library\"/> </table-jdbc-store> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\": { \"table-jdbc-store\": { \"dialect\": \"H2\", \"shared\": \"true\", \"table-name\": \"books\", \"schema\": { \"message-name\": \"books_value\", \"package\": \"library\" } } } } }",
"distributedCache: persistence: tableJdbcStore: dialect: \"H2\" shared: \"true\" tableName: \"books\" schema: messageName: \"books_value\" package: \"library\"",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence().addStore(TableJdbcStoreConfigurationBuilder.class) .dialect(DatabaseType.H2) .shared(\"true\") .tableName(\"books\") .schemaJdbcConfigurationBuilder() .messageName(\"books_value\") .packageName(\"library\");",
"install org.postgresql:postgresql:42.4.3",
"<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-cachestore-sql</artifactId> </dependency>",
"query-jdbc-store xmlns=\"urn:infinispan:config:store:sql:14.0\"",
"persistence().addStore(QueriesJdbcStoreConfigurationBuilder.class)",
"CREATE TABLE Person ( name VARCHAR(255) NOT NULL, picture BYTEA, sex VARCHAR(255), birthdate TIMESTAMP, accepted_tos BOOLEAN, notused VARCHAR(255), PRIMARY KEY (name) );",
"CREATE TABLE Address ( name VARCHAR(255) NOT NULL, street VARCHAR(255), city VARCHAR(255), zip INT, PRIMARY KEY (name) );",
"package com.example; message Address { optional string street = 1; optional string city = 2 [default = \"San Jose\"]; optional int32 zip = 3 [default = 0]; }",
"package com.example; import \"/path/to/address.proto\"; enum Sex { FEMALE = 1; MALE = 2; } message Person { optional string name = 1; optional Address address = 2; optional bytes picture = 3; optional Sex sex = 4; optional fixed64 birthDate = 5 [default = 0]; optional bool accepted_tos = 6 [default = false]; }",
"<distributed-cache> <persistence> <query-jdbc-store xmlns=\"urn:infinispan:config:store:sql:14.0\" dialect=\"POSTGRES\" shared=\"true\" key-columns=\"name\"> <connection-pool driver=\"org.postgresql.Driver\" connection-url=\"jdbc:postgresql://localhost:5432/postgres\" username=\"postgres\" password=\"changeme\"/> <queries select-single=\"SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = :name AND t2.name = :name\" select-all=\"SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = t2.name\" delete-single=\"DELETE FROM Person t1 WHERE t1.name = :name; DELETE FROM Address t2 where t2.name = :name\" delete-all=\"DELETE FROM Person; DELETE FROM Address\" upsert=\"INSERT INTO Person (name, picture, sex, birthdate, accepted_tos) VALUES (:name, :picture, :sex, :birthdate, :accepted_tos); INSERT INTO Address(name, street, city, zip) VALUES (:name, :street, :city, :zip)\" size=\"SELECT COUNT(*) FROM Person\" /> <schema message-name=\"Person\" package=\"com.example\" embedded-key=\"true\"/> </query-jdbc-store> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\": { \"query-jdbc-store\": { \"dialect\": \"POSTGRES\", \"shared\": \"true\", \"key-columns\": \"name\", \"connection-pool\": { \"username\": \"postgres\", \"password\": \"changeme\", \"driver\": \"org.postgresql.Driver\", \"connection-url\": \"jdbc:postgresql://localhost:5432/postgres\" }, \"queries\": { \"select-single\": \"SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = :name AND t2.name = :name\", \"select-all\": \"SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = t2.name\", \"delete-single\": \"DELETE FROM Person t1 WHERE t1.name = :name; DELETE FROM Address t2 where t2.name = :name\", \"delete-all\": \"DELETE FROM Person; DELETE FROM Address\", \"upsert\": \"INSERT INTO Person (name, picture, sex, birthdate, accepted_tos) VALUES (:name, :picture, :sex, :birthdate, :accepted_tos); INSERT INTO Address(name, street, city, zip) VALUES (:name, :street, :city, :zip)\", \"size\": \"SELECT COUNT(*) FROM Person\" }, \"schema\": { \"message-name\": \"Person\", \"package\": \"com.example\", \"embedded-key\": \"true\" } } } } }",
"distributedCache: persistence: queryJdbcStore: dialect: \"POSTGRES\" shared: \"true\" keyColumns: \"name\" connectionPool: username: \"postgres\" password: \"changeme\" driver: \"org.postgresql.Driver\" connectionUrl: \"jdbc:postgresql://localhost:5432/postgres\" queries: selectSingle: \"SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = :name AND t2.name = :name\" selectAll: \"SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = t2.name\" deleteSingle: \"DELETE FROM Person t1 WHERE t1.name = :name; DELETE FROM Address t2 where t2.name = :name\" deleteAll: \"DELETE FROM Person; DELETE FROM Address\" upsert: \"INSERT INTO Person (name, picture, sex, birthdate, accepted_tos) VALUES (:name, :picture, :sex, :birthdate, :accepted_tos); INSERT INTO Address(name, street, city, zip) VALUES (:name, :street, :city, :zip)\" size: \"SELECT COUNT(*) FROM Person\" schema: messageName: \"Person\" package: \"com.example\" embeddedKey: \"true\"",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence().addStore(QueriesJdbcStoreConfigurationBuilder.class) .dialect(DatabaseType.POSTGRES) .shared(\"true\") .keyColumns(\"name\") .queriesJdbcConfigurationBuilder() .select(\"SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = :name AND t2.name = :name\") .selectAll(\"SELECT t1.name, t1.picture, t1.sex, t1.birthdate, t1.accepted_tos, t2.street, t2.city, t2.zip FROM Person t1 JOIN Address t2 ON t1.name = t2.name\") .delete(\"DELETE FROM Person t1 WHERE t1.name = :name; DELETE FROM Address t2 where t2.name = :name\") .deleteAll(\"DELETE FROM Person; DELETE FROM Address\") .upsert(\"INSERT INTO Person (name, picture, sex, birthdate, accepted_tos) VALUES (:name, :picture, :sex, :birthdate, :accepted_tos); INSERT INTO Address(name, street, city, zip) VALUES (:name, :street, :city, :zip)\") .size(\"SELECT COUNT(*) FROM Person\") .schemaJdbcConfigurationBuilder() .messageName(\"Person\") .packageName(\"com.example\") .embeddedKey(true);",
"ISPN008064: No primary keys found for table <table_name>, check case sensitivity",
"<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-cachestore-jdbc</artifactId> </dependency>",
"xmlns=\"urn:infinispan:config:store:jdbc:14.0\"",
"persistence().addStore(JdbcStringBasedStoreConfigurationBuilder.class)",
"<distributed-cache> <persistence> <string-keyed-jdbc-store xmlns=\"urn:infinispan:config:store:jdbc:14.0\" dialect=\"H2\"> <connection-pool connection-url=\"jdbc:h2:mem:infinispan\" username=\"sa\" password=\"changeme\" driver=\"org.h2.Driver\"/> <string-keyed-table create-on-start=\"true\" prefix=\"ISPN_STRING_TABLE\"> <id-column name=\"ID_COLUMN\" type=\"VARCHAR(255)\" /> <data-column name=\"DATA_COLUMN\" type=\"BINARY\" /> <timestamp-column name=\"TIMESTAMP_COLUMN\" type=\"BIGINT\" /> <segment-column name=\"SEGMENT_COLUMN\" type=\"INT\"/> </string-keyed-table> </string-keyed-jdbc-store> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\": { \"string-keyed-jdbc-store\": { \"dialect\": \"H2\", \"string-keyed-table\": { \"prefix\": \"ISPN_STRING_TABLE\", \"create-on-start\": true, \"id-column\": { \"name\": \"ID_COLUMN\", \"type\": \"VARCHAR(255)\" }, \"data-column\": { \"name\": \"DATA_COLUMN\", \"type\": \"BINARY\" }, \"timestamp-column\": { \"name\": \"TIMESTAMP_COLUMN\", \"type\": \"BIGINT\" }, \"segment-column\": { \"name\": \"SEGMENT_COLUMN\", \"type\": \"INT\" } }, \"connection-pool\": { \"connection-url\": \"jdbc:h2:mem:infinispan\", \"driver\": \"org.h2.Driver\", \"username\": \"sa\", \"password\": \"changeme\" } } } } }",
"distributedCache: persistence: stringKeyedJdbcStore: dialect: \"H2\" stringKeyedTable: prefix: \"ISPN_STRING_TABLE\" createOnStart: true idColumn: name: \"ID_COLUMN\" type: \"VARCHAR(255)\" dataColumn: name: \"DATA_COLUMN\" type: \"BINARY\" timestampColumn: name: \"TIMESTAMP_COLUMN\" type: \"BIGINT\" segmentColumn: name: \"SEGMENT_COLUMN\" type: \"INT\" connectionPool: connectionUrl: \"jdbc:h2:mem:infinispan\" driver: \"org.h2.Driver\" username: \"sa\" password: \"changeme\"",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.persistence().addStore(JdbcStringBasedStoreConfigurationBuilder.class) .dialect(DatabaseType.H2) .table() .dropOnExit(true) .createOnStart(true) .tableNamePrefix(\"ISPN_STRING_TABLE\") .idColumnName(\"ID_COLUMN\").idColumnType(\"VARCHAR(255)\") .dataColumnName(\"DATA_COLUMN\").dataColumnType(\"BINARY\") .timestampColumnName(\"TIMESTAMP_COLUMN\").timestampColumnType(\"BIGINT\") .segmentColumnName(\"SEGMENT_COLUMN\").segmentColumnType(\"INT\") .connectionPool() .connectionUrl(\"jdbc:h2:mem:infinispan\") .username(\"sa\") .password(\"changeme\") .driverClass(\"org.h2.Driver\");",
"<property name=\"database.max_background_compactions\">2</property> <property name=\"data.write_buffer_size\">64MB</property> <property name=\"data.compression_per_level\">kNoCompression:kNoCompression:kNoCompression:kSnappyCompression:kZSTD:kZSTD</property>",
"<local-cache> <persistence> <rocksdb-store xmlns=\"urn:infinispan:config:store:rocksdb:14.0\" path=\"rocksdb/data\"> <expiration path=\"rocksdb/expired\"/> </rocksdb-store> </persistence> </local-cache>",
"{ \"local-cache\": { \"persistence\": { \"rocksdb-store\": { \"path\": \"rocksdb/data\", \"expiration\": { \"path\": \"rocksdb/expired\" } } } } }",
"localCache: persistence: rocksdbStore: path: \"rocksdb/data\" expiration: path: \"rocksdb/expired\"",
"Configuration cacheConfig = new ConfigurationBuilder().persistence() .addStore(RocksDBStoreConfigurationBuilder.class) .build(); EmbeddedCacheManager cacheManager = new DefaultCacheManager(cacheConfig); Cache<String, User> usersCache = cacheManager.getCache(\"usersCache\"); usersCache.put(\"raytsang\", new User(...));",
"Properties props = new Properties(); props.put(\"database.max_background_compactions\", \"2\"); props.put(\"data.write_buffer_size\", \"512MB\"); Configuration cacheConfig = new ConfigurationBuilder().persistence() .addStore(RocksDBStoreConfigurationBuilder.class) .location(\"rocksdb/data\") .expiredLocation(\"rocksdb/expired\") .properties(props) .build();",
"<distributed-cache> <persistence> <remote-store xmlns=\"urn:infinispan:config:store:remote:14.0\" cache=\"mycache\" raw-values=\"true\"> <remote-server host=\"one\" port=\"12111\" /> <remote-server host=\"two\" /> <connection-pool max-active=\"10\" exhausted-action=\"CREATE_NEW\" /> </remote-store> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"remote-store\": { \"cache\": \"mycache\", \"raw-values\": \"true\", \"remote-server\": [ { \"host\": \"one\", \"port\": \"12111\" }, { \"host\": \"two\" } ], \"connection-pool\": { \"max-active\": \"10\", \"exhausted-action\": \"CREATE_NEW\" } } } }",
"distributedCache: remoteStore: cache: \"mycache\" rawValues: \"true\" remoteServer: - host: \"one\" port: \"12111\" - host: \"two\" connectionPool: maxActive: \"10\" exhaustedAction: \"CREATE_NEW\"",
"ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence().addStore(RemoteStoreConfigurationBuilder.class) .ignoreModifications(false) .purgeOnStartup(false) .remoteCacheName(\"mycache\") .rawValues(true) .addServer() .host(\"one\").port(12111) .addServer() .host(\"two\") .connectionPool() .maxActive(10) .exhaustedAction(ExhaustedAction.CREATE_NEW) .async().enable();",
"<distributed-cache> <persistence> <cluster-loader preload=\"true\" remote-timeout=\"500\"/> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\" : { \"cluster-loader\" : { \"preload\" : true, \"remote-timeout\" : \"500\" } } } }",
"distributedCache: persistence: clusterLoader: preload: \"true\" remoteTimeout: \"500\"",
"ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addClusterLoader() .remoteCallTimeout(500);",
"<distributed-cache> <persistence> <store class=\"org.infinispan.persistence.example.MyInMemoryStore\" /> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\" : { \"store\" : { \"class\" : \"org.infinispan.persistence.example.MyInMemoryStore\" } } } }",
"distributedCache: persistence: store: class: \"org.infinispan.persistence.example.MyInMemoryStore\"",
"Configuration config = new ConfigurationBuilder() .persistence() .addStore(CustomStoreConfigurationBuilder.class) .build();",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>org.infinispan.example</groupId> <artifactId>jdbc-migrator-example</artifactId> <version>1.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-tools</artifactId> </dependency> <!-- Additional dependencies --> </dependencies> <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.2.1</version> <executions> <execution> <goals> <goal>java</goal> </goals> </execution> </executions> <configuration> <mainClass>org.infinispan.tools.store.migrator.StoreMigrator</mainClass> <arguments> <argument>path/to/migrator.properties</argument> </arguments> </configuration> </plugin> </plugins> </build> </project>",
"source.type=SOFT_INDEX_FILE_STORE source.cache_name=myCache source.location=/path/to/source/sifs source.version=<version>",
"target.type=SINGLE_FILE_STORE target.cache_name=myCache target.location=/path/to/target/sfs.dat",
"Example configuration for migrating to a JDBC String-Based cache store target.type=STRING target.cache_name=myCache target.dialect=POSTGRES target.marshaller.class=org.example.CustomMarshaller target.marshaller.externalizers=25:Externalizer1,org.example.Externalizer2 target.connection_pool.connection_url=jdbc:postgresql:postgres target.connection_pool.driver_class=org.postrgesql.Driver target.connection_pool.username=postgres target.connection_pool.password=redhat target.db.disable_upsert=false target.db.disable_indexing=false target.table.string.table_name_prefix=tablePrefix target.table.string.id.name=id_column target.table.string.data.name=datum_column target.table.string.timestamp.name=timestamp_column target.table.string.id.type=VARCHAR target.table.string.data.type=bytea target.table.string.timestamp.type=BIGINT target.key_to_string_mapper=org.infinispan.persistence.keymappers. DefaultTwoWayKey2StringMapper",
"Example configuration for migrating from a RocksDB cache store. source.type=ROCKSDB source.cache_name=myCache source.location=/path/to/rocksdb/database source.compression=SNAPPY",
"Example configuration for migrating to a Single File cache store. target.type=SINGLE_FILE_STORE target.cache_name=myCache target.location=/path/to/sfs.dat",
"Example configuration for migrating to a Soft-Index File cache store. target.type=SOFT_INDEX_FILE_STORE target.cache_name=myCache target.location=path/to/sifs/database target.location=path/to/sifs/index",
"mvn exec:java"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/configuring_data_grid_caches/persistence |
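The store migrator examples above are plain java.util.Properties files handed to org.infinispan.tools.store.migrator.StoreMigrator through the exec-maven-plugin configuration shown in the pom.xml. As a minimal sketch (not part of the product documentation), the following Python script generates such a properties file from a dictionary and then invokes mvn exec:java; it assumes the example pom.xml is in the working directory and that its <argument> element points at the generated migrator.properties file.

#!/usr/bin/env python3
"""Sketch: write a migrator.properties file and run StoreMigrator via Maven.
The keys simply mirror the Soft-Index -> Single File example above; adjust them
for your own source and target stores."""

import subprocess
from pathlib import Path

# Source and target store settings, as in the examples above.
settings = {
    "source.type": "SOFT_INDEX_FILE_STORE",
    "source.cache_name": "myCache",
    "source.location": "/path/to/source/sifs",
    "target.type": "SINGLE_FILE_STORE",
    "target.cache_name": "myCache",
    "target.location": "/path/to/target/sfs.dat",
}

def write_properties(path: Path, props: dict) -> None:
    # StoreMigrator reads plain Java properties, one key=value per line.
    path.write_text("".join(f"{key}={value}\n" for key, value in props.items()))

if __name__ == "__main__":
    write_properties(Path("migrator.properties"), settings)
    # The pom.xml passes the properties path to StoreMigrator as its argument.
    subprocess.run(["mvn", "exec:java"], check=True)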
Chapter 39. Using ID views for Active Directory users | Chapter 39. Using ID views for Active Directory users You can use ID views to specify new values for the POSIX attributes of your Active Directory (AD) users in an IdM-AD Trust environment. By default, IdM applies the Default Trust View to all AD users. You can configure additional ID views on individual IdM clients to further adjust which POSIX attributes specific users receive. 39.1. How the Default Trust View works The Default Trust View is the default ID view that is always applied to AD users and groups in trust-based setups. It is created automatically when you establish the trust using the ipa-adtrust-install command and cannot be deleted. Note The Default Trust View only accepts overrides for AD users and groups, not for IdM users and groups. Using the Default Trust View, you can define custom POSIX attributes for AD users and groups, thus overriding the values defined in AD. Table 39.1. Applying the Default Trust View Values in AD Default Trust View Result Login ad_user ad_user ad_user UID 111 222 222 GID 111 (no value) 111 You can also configure additional ID Views to override the Default Trust View on IdM clients. IdM applies the values from the host-specific ID view on top of the Default Trust View: If an attribute is defined in the host-specific ID view, IdM applies the value from this ID view. If an attribute is not defined in the host-specific ID view, IdM applies the value from the Default Trust View. Table 39.2. Applying a host-specific ID view on top of the Default Trust View Values in AD Default Trust View Host-specific ID view Result Login ad_user ad_user (no value) ad_user UID 111 222 333 333 GID 111 (no value) 333 333 Note You can only apply host-specific ID views to override the Default Trust View on IdM clients. IdM servers and replicas always apply the values from the Default Trust View. Additional resources Using an ID view to override a user attribute value on an IdM client 39.2. Defining global attributes for an AD user by modifying the Default Trust View If you want to override a POSIX attribute for an Active Directory (AD) user throughout your entire IdM deployment, modify the entry for that user in the Default Trust View. This procedure sets the GID for the AD user ad_user@ad.example.com to 732000006. Prerequisites You have authenticated as an IdM administrator. A group must exist with the GID or you must set the GID in an ID override for a group. Procedure As an IdM administrator, create an ID override for the AD user in the Default Trust View that changes the GID number to 732000006: Clear the entry for the ad_user@ad.example.com user from the SSSD cache on all IdM servers and clients. This removes stale data and allows the new override value to apply. Verification Retrieve information for the ad_user@ad.example.com user to verify the GID reflects the updated value. 39.3. Overriding Default Trust View attributes for an AD user on an IdM client with an ID view You might want to override some POSIX attributes from the Default Trust View for an Active Directory (AD) user. For example, you might need to give an AD user a different GID on one particular IdM client. You can use an ID view to override a value from the Default Trust View for an AD user and apply it to a single host. This procedure explains how to set the GID for the ad_user@ad.example.com AD user on the host1.idm.example.com IdM client to 732001337. Prerequisites You have root access to the host1.idm.example.com IdM client.
You are logged in as a user with the required privileges, for example the admin user. Procedure Create an ID view. For example, to create an ID view named example_for_host1 : Add a user override to the example_for_host1 ID view. To override the user's GID: Enter the ipa idoverrideuser-add command Add the name of the ID view Add the user name, also called the anchor Add the --gidnumber= option: Apply example_for_host1 to the host1.idm.example.com IdM client: Note The ipa idview-apply command also accepts the --hostgroups option. The option applies the ID view to hosts that belong to the specified host group, but does not associate the ID view with the host group itself. Instead, the --hostgroups option expands the members of the specified host group and applies the --hosts option individually to every one of them. This means that if a host is added to the host group in the future, the ID view does not apply to the new host. Clear the entry for the ad_user@ad.example.com user from the SSSD cache on the host1.idm.example.com IdM client. This removes stale data and allows the new override value to apply. Verification SSH to host1 as ad_user@ad.example.com : Retrieve information for the ad_user@ad.example.com user to verify the GID reflects the updated value. 39.4. Applying an ID view to an IdM host group The ipa idview-apply command accepts the --hostgroups option. However, the option acts as a one-time operation that applies the ID view to hosts that currently belong to the specified host group, but does not dynamically associate the ID view with the host group itself. The --hostgroups option expands the members of the specified host group and applies the --hosts option individually to every one of them. If you add a new host to the host group later, you must apply the ID view to the new host manually, using the ipa idview-apply command with the --hosts option. Similarly, if you remove a host from a host group, the ID view is still assigned to the host after the removal. To unapply the ID view from the removed host, you must run the ipa idview-unapply id_view_name --hosts= name_of_the_removed_host command. Follow this procedure to achieve the following goals: How to create a host group and add hosts to it. How to apply an ID view to the host group. How to add a new host to the host group and apply the ID view to the new host. Prerequisites Ensure that the ID view you want to apply to the host group exists in IdM. For example, to create an ID view to override the GID for an AD user, see Overriding Default Trust View attributes for an AD user on an IdM client with an ID view Procedure Create a host group and add hosts to it: Create a host group. For example, to create a host group named baltimore : Add hosts to the host group. For example, to add the host102 and host103 to the baltimore host group: Apply an ID view to the hosts in the host group. For example, to apply the example_for_host1 ID view to the baltimore host group: Add a new host to the host group and apply the ID view to the new host: Add a new host to the host group. For example, to add the somehost.idm.example.com host to the baltimore host group: Optional: Display the ID view information. For example, to display the details about the example_for_host1 ID view: The output shows that the ID view is not applied to somehost.idm.example.com , the newly-added host in the baltimore host group. Apply the ID view to the new host.
For example, to apply the example_for_host1 ID view to somehost.idm.example.com : Verification Display the ID view information again: The output shows that ID view is now applied to somehost.idm.example.com , the newly-added host in the baltimore host group. | [
"ipa idoverrideuser-add 'Default Trust View' [email protected] --gidnumber=732000006",
"sssctl cache-expire -u [email protected]",
"id [email protected] uid=702801456([email protected]) gid=732000006(ad_admins) groups=732000006(ad_admins),702800513(domain [email protected])",
"ipa idview-add example_for_host1 --------------------------- Added ID View \"example_for_host1\" --------------------------- ID View Name: example_for_host1",
"ipa idoverrideuser-add example_for_host1 [email protected] --gidnumber=732001337 ----------------------------- Added User ID override \"[email protected]\" ----------------------------- Anchor to override: [email protected] GID: 732001337",
"ipa idview-apply example_for_host1 --hosts=host1.idm.example.com ----------------------------- Applied ID View \"example_for_host1\" ----------------------------- hosts: host1.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"sssctl cache-expire -u [email protected]",
"ssh [email protected]@host1.idm.example.com",
"[[email protected]@host1 ~]USD id [email protected] uid=702801456([email protected]) gid=732001337(admins2) groups=732001337(admins2),702800513(domain [email protected])",
"ipa hostgroup-add --desc=\"Baltimore hosts\" baltimore --------------------------- Added hostgroup \"baltimore\" --------------------------- Host-group: baltimore Description: Baltimore hosts",
"ipa hostgroup-add-member --hosts={host102,host103} baltimore Host-group: baltimore Description: Baltimore hosts Member hosts: host102.idm.example.com, host103.idm.example.com ------------------------- Number of members added 2 -------------------------",
"ipa idview-apply --hostgroups=baltimore ID View Name: example_for_host1 ----------------------------------------- Applied ID View \"example_for_host1\" ----------------------------------------- hosts: host102.idm.example.com, host103.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 2 ---------------------------------------------",
"ipa hostgroup-add-member --hosts=somehost.idm.example.com baltimore Host-group: baltimore Description: Baltimore hosts Member hosts: host102.idm.example.com, host103.idm.example.com,somehost.idm.example.com ------------------------- Number of members added 1 -------------------------",
"ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 [...] Hosts the view applies to: host102.idm.example.com, host103.idm.example.com objectclass: ipaIDView, top, nsContainer",
"ipa idview-apply --host=somehost.idm.example.com ID View Name: example_for_host1 ----------------------------------------- Applied ID View \"example_for_host1\" ----------------------------------------- hosts: somehost.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 [...] Hosts the view applies to: host102.idm.example.com, host103.idm.example.com, somehost.idm.example.com objectclass: ipaIDView, top, nsContainer"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/assembly_using-id-views-for-active-directory-users_managing-users-groups-hosts |
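Because ipa idview-apply --hostgroups only expands the current members of a host group, hosts added to the group later still need the ID view applied to them individually. As a minimal sketch of how that could be scripted (not part of the product documentation), the following Python wrapper loops ipa idview-apply --hosts over a list of host names; it assumes the ipa CLI is installed and that you have already obtained a Kerberos ticket as an administrator, and the host list could come from ipa hostgroup-show or any inventory you maintain.

#!/usr/bin/env python3
"""Sketch: re-apply an ID view to every host passed on the command line."""

import subprocess
import sys

def apply_id_view(view: str, hosts: list[str]) -> None:
    for host in hosts:
        # Equivalent to running: ipa idview-apply <view> --hosts=<host>
        subprocess.run(["ipa", "idview-apply", view, f"--hosts={host}"], check=True)

if __name__ == "__main__":
    if len(sys.argv) < 3:
        sys.exit("usage: apply_view.py ID_VIEW HOST [HOST ...]")
    apply_id_view(sys.argv[1], sys.argv[2:])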
Using JDK Flight Recorder with Red Hat build of OpenJDK | Using JDK Flight Recorder with Red Hat build of OpenJDK Red Hat build of OpenJDK 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_jdk_flight_recorder_with_red_hat_build_of_openjdk/index |
Getting Started Guide | Getting Started Guide Red Hat Single Sign-On 7.4 For Use with Red Hat Single Sign-On 7.4 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/getting_started_guide/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/making-open-source-more-inclusive |
Monitoring APIs | Monitoring APIs OpenShift Container Platform 4.18 Reference guide for monitoring APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/monitoring_apis/index |
Chapter 5. Enabling alert routing for user-defined projects | Chapter 5. Enabling alert routing for user-defined projects In Red Hat OpenShift Service on AWS, an administrator can enable alert routing for user-defined projects. This process consists of the following steps: Enable alert routing for user-defined projects to use a separate Alertmanager instance. Grant users permission to configure alert routing for user-defined projects. After you complete these steps, developers and other users can configure custom alerts and alert routing for their user-defined projects. 5.1. Understanding alert routing for user-defined projects As a dedicated-admin , you can enable alert routing for user-defined projects. With this feature, you can allow users with the alert-routing-edit cluster role to configure alert notification routing and receivers for user-defined projects. These notifications are routed by an Alertmanager instance dedicated to user-defined monitoring. Users can then create and configure user-defined alert routing by creating or editing the AlertmanagerConfig objects for their user-defined projects without the help of an administrator. After a user has defined alert routing for a user-defined project, user-defined alert notifications are routed to the alertmanager-user-workload pods in the openshift-user-workload-monitoring namespace. Note Review the following limitations of alert routing for user-defined projects: For user-defined alerting rules, user-defined routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace ns1 only applies to PrometheusRules resources in the same namespace. When a namespace is excluded from user-defined monitoring, AlertmanagerConfig resources in the namespace cease to be part of the Alertmanager configuration. 5.2. Enabling a separate Alertmanager instance for user-defined alert routing In Red Hat OpenShift Service on AWS, you may want to deploy a dedicated Alertmanager instance for user-defined projects, which provides user-defined alerts separate from default platform alerts. In these cases, you can optionally enable a separate instance of Alertmanager to send alerts for user-defined projects only. Prerequisites You have access to the cluster as a user with the dedicated-admin role. The user-workload-monitoring-config ConfigMap object exists. This object is created by default when the cluster is created. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config ConfigMap object: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add enabled: true and enableAlertmanagerConfig: true in the alertmanager section under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true 1 enableAlertmanagerConfig: true 2 1 Set the enabled value to true to enable a dedicated instance of the Alertmanager for user-defined projects in a cluster. Set the value to false or omit the key entirely to disable the Alertmanager for user-defined projects. If you set this value to false or if the key is omitted, user-defined alerts are routed to the default platform Alertmanager instance. 2 Set the enableAlertmanagerConfig value to true to enable users to define their own alert routing configurations with AlertmanagerConfig objects. Save the file to apply the changes. 
The dedicated instance of Alertmanager for user-defined projects starts automatically. Verification Verify that the alert-manager-user-workload pods are running: # oc -n openshift-user-workload-monitoring get pods Example output NAME READY STATUS RESTARTS AGE alertmanager-user-workload-0 6/6 Running 0 38s alertmanager-user-workload-1 6/6 Running 0 38s ... 5.3. Granting users permission to configure alert routing for user-defined projects You can grant users permission to configure alert routing for user-defined projects. Prerequisites You have access to the cluster as a user with the dedicated-admin role. The user-workload-monitoring-config ConfigMap object exists. This object is created by default when the cluster is created. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure Assign the alert-routing-edit cluster role to a user in the user-defined project: USD oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1 1 For <namespace> , substitute the namespace for the user-defined project, such as ns1 . For <user> , substitute the username for the account to which you want to assign the role. Additional resources Configuring alert routing for user-defined projects | [
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true 1 enableAlertmanagerConfig: true 2",
"oc -n openshift-user-workload-monitoring get pods",
"NAME READY STATUS RESTARTS AGE alertmanager-user-workload-0 6/6 Running 0 38s alertmanager-user-workload-1 6/6 Running 0 38s",
"oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/monitoring/enabling-alert-routing-for-user-defined-projects |
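The role grant in the last procedure is per user and per namespace, so clusters with many user-defined projects often repeat it. As a hedged sketch only (not part of the product documentation), the following Python script loops the documented oc adm policy command over several namespace/user pairs; it assumes oc is on the PATH, that you are logged in with dedicated-admin privileges, and the pairs listed are placeholders to replace with your own.

#!/usr/bin/env python3
"""Sketch: grant alert-routing-edit to several users in their own projects."""

import subprocess

# (namespace, user) pairs to grant alert routing permissions to; placeholders.
GRANTS = [
    ("ns1", "developer1"),
    ("ns2", "developer2"),
]

def grant_alert_routing_edit(namespace: str, user: str) -> None:
    # Same command as in the procedure above, one invocation per pair.
    subprocess.run(
        ["oc", "-n", namespace, "adm", "policy", "add-role-to-user",
         "alert-routing-edit", user],
        check=True,
    )

if __name__ == "__main__":
    for namespace, user in GRANTS:
        grant_alert_routing_edit(namespace, user)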
Appendix F. Object Storage Daemon (OSD) configuration options | Appendix F. Object Storage Daemon (OSD) configuration options The following are Ceph Object Storage Daemon (OSD) configuration options that can be set during deployment. You can set these configuration options with the ceph config set osd CONFIGURATION_OPTION VALUE command. osd_uuid Description The universally unique identifier (UUID) for the Ceph OSD. Type UUID Default The UUID. Note The osd uuid applies to a single Ceph OSD. The fsid applies to the entire cluster. osd_data Description The path to the OSD's data. You must create the directory when deploying Ceph. Mount a drive for OSD data at this mount point. Type String Default /var/lib/ceph/osd/USDcluster-USDid osd_max_write_size Description The maximum size of a write in megabytes. Type 32-bit Integer Default 90 osd_client_message_size_cap Description The largest client data message allowed in memory. Type 64-bit Integer Unsigned Default 500MB default. 500*1024L*1024L osd_class_dir Description The class path for RADOS class plug-ins. Type String Default USDlibdir/rados-classes osd_max_scrubs Description The maximum number of simultaneous scrub operations for a Ceph OSD. Type 32-bit Int Default 1 osd_scrub_thread_timeout Description The maximum time in seconds before timing out a scrub thread. Type 32-bit Integer Default 60 osd_scrub_finalize_thread_timeout Description The maximum time in seconds before timing out a scrub finalize thread. Type 32-bit Integer Default 60*10 osd_scrub_begin_hour Description This restricts scrubbing to this hour of the day or later. Use osd_scrub_begin_hour = 0 and osd_scrub_end_hour = 0 to allow scrubbing the entire day. Along with osd_scrub_end_hour , they define a time window, in which the scrubs can happen. But a scrub is performed no matter whether the time window allows or not, as long as the placement group's scrub interval exceeds osd_scrub_max_interval . Type Integer Default 0 Allowed range [0,23] osd_scrub_end_hour Description This restricts scrubbing to the hour earlier than this. Use osd_scrub_begin_hour = 0 and osd_scrub_end_hour = 0 to allow scrubbing for the entire day. Along with osd_scrub_begin_hour , they define a time window, in which the scrubs can happen. But a scrub is performed no matter whether the time window allows or not, as long as the placement group's scrub interval exceeds osd_scrub_max_interval . Type Integer Default 0 Allowed range [0,23] osd_scrub_load_threshold Description The maximum load. Ceph will not scrub when the system load (as defined by the getloadavg() function) is higher than this number. Default is 0.5 . Type Float Default 0.5 osd_scrub_min_interval Description The minimum interval in seconds for scrubbing the Ceph OSD when the Red Hat Ceph Storage cluster load is low. Type Float Default Once per day. 60*60*24 osd_scrub_max_interval Description The maximum interval in seconds for scrubbing the Ceph OSD irrespective of cluster load. Type Float Default Once per week. 7*60*60*24 osd_scrub_interval_randomize_ratio Description Takes the ratio and randomizes the scheduled scrub between osd scrub min interval and osd scrub max interval . Type Float Default 0.5 . mon_warn_not_scrubbed Description Number of seconds after osd_scrub_interval to warn about any PGs that were not scrubbed. Type Integer Default 0 (no warning). osd_scrub_chunk_min Description The object store is partitioned into chunks which end on hash boundaries. 
For chunky scrubs, Ceph scrubs objects one chunk at a time with writes blocked for that chunk. The osd scrub chunk min setting represents the minimum number of chunks to scrub. Type 32-bit Integer Default 5 osd_scrub_chunk_max Description The maximum number of chunks to scrub. Type 32-bit Integer Default 25 osd_scrub_sleep Description The time to sleep between deep scrub operations. Type Float Default 0 (or off). osd_scrub_during_recovery Description Allows scrubbing during recovery. Type Bool Default false osd_scrub_invalid_stats Description Forces extra scrub to fix stats marked as invalid. Type Bool Default true osd_scrub_priority Description Controls queue priority of scrub operations versus client I/O. Type Unsigned 32-bit Integer Default 5 osd_requested_scrub_priority Description The priority set for user requested scrub on the work queue. If this value were to be smaller than osd_client_op_priority , it can be boosted to the value of osd_client_op_priority when scrub is blocking client operations. Type Unsigned 32-bit Integer Default 120 osd_scrub_cost Description Cost of scrub operations in megabytes for queue scheduling purposes. Type Unsigned 32-bit Integer Default 52428800 osd_deep_scrub_interval Description The interval for deep scrubbing, that is fully reading all data. The osd scrub load threshold parameter does not affect this setting. Type Float Default Once per week. 60*60*24*7 osd_deep_scrub_stride Description Read size when doing a deep scrub. Type 32-bit Integer Default 512 KB. 524288 mon_warn_not_deep_scrubbed Description Number of seconds after osd_deep_scrub_interval to warn about any PGs that were not scrubbed. Type Integer Default 0 (no warning) osd_deep_scrub_randomize_ratio Description The rate at which scrubs will randomly become deep scrubs (even before osd_deep_scrub_interval has passed). Type Float Default 0.15 or 15% osd_deep_scrub_update_digest_min_age Description How many seconds old objects must be before scrub updates the whole-object digest. Type Integer Default 7200 (120 hours) osd_deep_scrub_large_omap_object_key_threshold Description Warning when you encounter an object with more OMAP keys than this. Type Integer Default 200000 osd_deep_scrub_large_omap_object_value_sum_threshold Description Warning when you encounter an object with more OMAP key bytes than this. Type Integer Default 1 G osd_delete_sleep Description Time in seconds to sleep before the removal transaction. This throttles the placement group deletion process. Type Float Default 0.0 osd_delete_sleep_hdd Description Time in seconds to sleep before the removal transaction for HDDs. Type Float Default 5.0 osd_delete_sleep_ssd Description Time in seconds to sleep before the removal transaction for SSDs. Type Float Default 1.0 osd_delete_sleep_hybrid Description Time in seconds to sleep before the removal transaction when Ceph OSD data is on HDD and OSD journal or WAL and DB is on SSD. Type Float Default 1.0 osd_op_num_shards Description The number of shards for client operations. Type 32-bit Integer Default 0 osd_op_num_threads_per_shard Description The number of threads per shard for client operations. Type 32-bit Integer Default 0 osd_op_num_shards_hdd Description The number of shards for HDD operations. Type 32-bit Integer Default 5 osd_op_num_threads_per_shard_hdd Description The number of threads per shard for HDD operations. Type 32-bit Integer Default 1 osd_op_num_shards_ssd Description The number of shards for SSD operations. 
Type 32-bit Integer Default 8 osd_op_num_threads_per_shard_ssd Description The number of threads per shard for SSD operations. Type 32-bit Integer Default 2 osd_op_queue Description Sets the type of queue to be used for operation prioritizing within Ceph OSDs. Requires a restart of the OSD daemons. Type String Default wpq Valid choices wpq , mclock_scheduler , debug_random Important The mClock OSD scheduler is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details. osd_op_queue_cut_off Description Selects which priority operations are sent to the strict queue and which are sent to the normal queue. Requires a restart of the OSD daemons. The low setting sends all replication and higher operations to the strict queue, while the high option sends only replication acknowledgment operations and higher to the strict queue. The high setting helps when some Ceph OSDs in the cluster are very busy, especially when combined with the wpq option in the osd_op_queue setting. Ceph OSDs that are very busy handling replication traffic can deplete primary client traffic on these OSDs without these settings. Type String Default high Valid choices low , high , debug_random osd_client_op_priority Description The priority set for client operations. It is relative to osd recovery op priority . Type 32-bit Integer Default 63 Valid Range 1-63 osd_recovery_op_priority Description The priority set for recovery operations. It is relative to osd client op priority . Type 32-bit Integer Default 3 Valid Range 1-63 osd_op_thread_timeout Description The Ceph OSD operation thread timeout in seconds. Type 32-bit Integer Default 15 osd_op_complaint_time Description An operation becomes complaint worthy after the specified number of seconds have elapsed. Type Float Default 30 osd_disk_threads Description The number of disk threads, which are used to perform background disk intensive OSD operations such as scrubbing and snap trimming. Type 32-bit Integer Default 1 osd_op_history_size Description The maximum number of completed operations to track. Type 32-bit Unsigned Integer Default 20 osd_op_history_duration Description The oldest completed operation to track. Type 32-bit Unsigned Integer Default 600 osd_op_log_threshold Description How many operations logs to display at once. Type 32-bit Integer Default 5 osd_op_timeout Description The time in seconds after which running OSD operations time out. Type Integer Default 0 Important Do not set the osd op timeout option unless your clients can handle the consequences. For example, setting this parameter on clients running in virtual machines can lead to data corruption because the virtual machines interpret this timeout as a hardware failure. osd_max_backfills Description The maximum number of backfill operations allowed to or from a single OSD. Type 64-bit Unsigned Integer Default 1 osd_backfill_scan_min Description The minimum number of objects per backfill scan. Type 32-bit Integer Default 64 osd_backfill_scan_max Description The maximum number of objects per backfill scan. 
Type 32-bit Integer Default 512 osd_backfill_full_ratio Description Refuse to accept backfill requests when the Ceph OSD's full ratio is above this value. Type Float Default 0.85 osd_backfill_retry_interval Description The number of seconds to wait before retrying backfill requests. Type Double Default 30.000000 osd_map_dedup Description Enable removing duplicates in the OSD map. Type Boolean Default true osd_map_cache_size Description The size of the OSD map cache in megabytes. Type 32-bit Integer Default 50 osd_map_cache_bl_size Description The size of the in-memory OSD map cache in OSD daemons. Type 32-bit Integer Default 50 osd_map_cache_bl_inc_size Description The size of the in-memory OSD map cache incrementals in OSD daemons. Type 32-bit Integer Default 100 osd_map_message_max Description The maximum map entries allowed per MOSDMap message. Type 32-bit Integer Default 40 osd_snap_trim_thread_timeout Description The maximum time in seconds before timing out a snap trim thread. Type 32-bit Integer Default 60*60*1 osd_pg_max_concurrent_snap_trims Description The max number of parallel snap trims/PG. This controls how many objects per PG to trim at once. Type 32-bit Integer Default 2 osd_snap_trim_sleep Description Insert a sleep between every trim operation a PG issues. Type 32-bit Integer Default 0 osd_snap_trim_sleep_hdd Description Time in seconds to sleep before the snapshot trimming for HDDs. Type Float Default 5.0 osd_snap_trim_sleep_ssd Description Time in seconds to sleep before the snapshot trimming operation for SSD OSDs, including NVMe. Type Float Default 0.0 osd_snap_trim_sleep_hybrid Description Time in seconds to sleep before the snapshot trimming operation when OSD data is on an HDD and the OSD journal or WAL and DB is on an SSD. Type Float Default 2.0 osd_max_trimming_pgs Description The max number of trimming PGs Type 32-bit Integer Default 2 osd_backlog_thread_timeout Description The maximum time in seconds before timing out a backlog thread. Type 32-bit Integer Default 60*60*1 osd_default_notify_timeout Description The OSD default notification timeout (in seconds). Type 32-bit Integer Unsigned Default 30 osd_check_for_log_corruption Description Check log files for corruption. Can be computationally expensive. Type Boolean Default false osd_remove_thread_timeout Description The maximum time in seconds before timing out a remove OSD thread. Type 32-bit Integer Default 60*60 osd_command_thread_timeout Description The maximum time in seconds before timing out a command thread. Type 32-bit Integer Default 10*60 osd_command_max_records Description Limits the number of lost objects to return. Type 32-bit Integer Default 256 osd_auto_upgrade_tmap Description Uses tmap for omap on old objects. Type Boolean Default true osd_tmapput_sets_users_tmap Description Uses tmap for debugging only. Type Boolean Default false osd_preserve_trimmed_log Description Preserves trimmed log files, but uses more disk space. Type Boolean Default false osd_recovery_delay_start Description After peering completes, Ceph delays for the specified number of seconds before starting to recover objects. Type Float Default 0 osd_recovery_max_active Description The number of active recovery requests per OSD at one time. More requests will accelerate recovery, but the requests place an increased load on the cluster. Type 32-bit Integer Default 0 osd_recovery_max_active_hdd Description The number of active recovery requests per Ceph OSD at one time, if the primary device is HDD. 
Type Integer Default 3 osd_recovery_max_active_ssd Description The number of active recovery requests per Ceph OSD at one time, if the primary device is SSD. Type Integer Default 10 osd_recovery_sleep Description Time in seconds to sleep before the recovery or backfill operation. Increasing this value slows down recovery operation while client operations are less impacted. Type Float Default 0.0 osd_recovery_sleep_hdd Description Time in seconds to sleep before the recovery or backfill operation for HDDs. Type Float Default 0.1 osd_recovery_sleep_ssd Description Time in seconds to sleep before the recovery or backfill operation for SSDs. Type Float Default 0.0 osd_recovery_sleep_hybrid Description Time in seconds to sleep before the recovery or backfill operation when Ceph OSD data is on HDD and OSD journal or WAL and DB is on SSD. Type Float Default 0.025 osd_recovery_max_chunk Description The maximum size of a recovered chunk of data to push. Type 64-bit Integer Unsigned Default 8388608 osd_recovery_threads Description The number of threads for recovering data. Type 32-bit Integer Default 1 osd_recovery_thread_timeout Description The maximum time in seconds before timing out a recovery thread. Type 32-bit Integer Default 30 osd_recover_clone_overlap Description Preserves clone overlap during recovery. Should always be set to true . Type Boolean Default true rados_osd_op_timeout Description Number of seconds that RADOS waits for a response from the OSD before returning an error from a RADOS operation. A value of 0 means no limit. Type Double Default 0 | [
"IMPORTANT: Red Hat does not recommend changing the default."
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/configuration_guide/osd-object-storage-daemon-configuration-options_conf |
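The appendix states that these options are set with ceph config set osd CONFIGURATION_OPTION VALUE. As an illustrative sketch only (not a recommendation of particular values), the following Python script applies a scrub time window and a backfill limit using options documented above; it assumes the ceph CLI and an admin keyring are available on the node where it runs.

#!/usr/bin/env python3
"""Sketch: apply a few OSD options from this appendix with `ceph config set osd`."""

import subprocess

# Option name -> value; the values below are examples, not defaults.
OSD_OPTIONS = {
    "osd_scrub_begin_hour": "22",   # only start scrubs from 22:00 ...
    "osd_scrub_end_hour": "6",      # ... until 06:00
    "osd_max_backfills": "1",       # keep the backfill limit explicit
}

def set_osd_option(name: str, value: str) -> None:
    # Equivalent to: ceph config set osd <name> <value>
    subprocess.run(["ceph", "config", "set", "osd", name, value], check=True)

if __name__ == "__main__":
    for name, value in OSD_OPTIONS.items():
        set_osd_option(name, value)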
Preface | Preface Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security, and bug fix errata. The Red Hat Enterprise Linux 6.9 Technical Notes document provides a list of notable bug fixes, all currently available Technology Previews, deprecated functionality, and other information. The Release Notes document describes the major changes made to the Red Hat Enterprise Linux 6 operating system and its accompanying applications for this minor release, as well as known problems. Capabilities and limits of Red Hat Enterprise Linux 6 as compared to other versions of the system are available in the Red Hat Knowledgebase article available at https://access.redhat.com/articles/rhel-limits . For information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/ . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_technical_notes/pref-red_hat_enterprise_linux-6.9_technical_notes-preface |
Chapter 14. FIPS 140-2 support | Chapter 14. FIPS 140-2 support The Federal Information Processing Standard Publication 140-2, (FIPS 140-2), is a U.S. government computer security standard used to approve cryptographic modules. Red Hat build of Keycloak supports running in FIPS 140-2 compliant mode. In this case, Red Hat build of Keycloak will use only FIPS approved cryptography algorithms for its functionality. To run in FIPS 140-2, Red Hat build of Keycloak should run on a FIPS 140-2 enabled system. This requirement usually assumes RHEL or Fedora where FIPS was enabled during installation. See RHEL documentation for the details. When the system is in FIPS mode, it makes sure that the underlying OpenJDK is in FIPS mode as well and would use only FIPS enabled security providers . To check that the system is in FIPS mode, you can check it with the following command from the command line: fips-mode-setup --check If the system is not in FIPS mode, you can enable it with the following command, however it is recommended that system is in FIPS mode since the installation rather than subsequently enabling it as follows: fips-mode-setup --enable 14.1. BouncyCastle library Red Hat build of Keycloak internally uses the BouncyCastle library for many cryptography utilities. Please note that the default version of the BouncyCastle library that shipped with Red Hat build of Keycloak is not FIPS compliant; however, BouncyCastle also provides a FIPS validated version of its library. The FIPS validated BouncyCastle library cannot be shipped with Red Hat build of Keycloak due to license constraints and Red Hat build of Keycloak cannot provide official support of it. Therefore, to run in FIPS compliant mode, you need to download BouncyCastle-FIPS bits and add them to the Red Hat build of Keycloak distribution. When Red Hat build of Keycloak executes in fips mode, it will use the BCFIPS bits instead of the default BouncyCastle bits, which achieves FIPS compliance. 14.1.1. BouncyCastle FIPS bits BouncyCastle FIPS can be downloaded from the BouncyCastle official page . Then you can add them to the directory KEYCLOAK_HOME/providers of your distribution. Make sure to use proper versions compatible with BouncyCastle Red Hat build of Keycloak dependencies. The supported BCFIPS bits needed are: bc-fips-1.0.2.3.jar bctls-fips-1.0.18.jar bcpkix-fips-1.0.7.jar 14.2. Generating keystore You can create either pkcs12 or bcfks keystore to be used for the Red Hat build of Keycloak server SSL. 14.2.1. PKCS12 keystore The p12 (or pkcs12 ) keystore (and/or truststore) works well in BCFIPS non-approved mode. PKCS12 keystore can be generated with OpenJDK 17 Java on RHEL 9 in the standard way. For instance, the following command can be used to generate the keystore: keytool -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword \ -keystore USDKEYCLOAK_HOME/conf/server.keystore \ -alias localhost \ -dname CN=localhost -keypass passwordpassword When the system is in FIPS mode, the default java.security file is changed in order to use FIPS enabled security providers, so no additional configuration is needed. Additionally, in the PKCS12 keystore, you can store PBE (password-based encryption) keys simply by using the keytool command, which makes it ideal for using it with Red Hat build of Keycloak KeyStore Vault and/or to store configuration properties in the KeyStore Config Source. For more details, see the Configuring Red Hat build of Keycloak and the Using a vault . 14.2.2. 
BCFKS keystore BCFKS keystore generation requires the use of the BouncyCastle FIPS libraries and a custom security file. You can start by creating a helper file, such as /tmp/kc.keystore-create.java.security . The content of the file needs only to have the following property: securerandom.strongAlgorithms=PKCS11:SunPKCS11-NSS-FIPS Next, enter a command such as the following to generate the keystore: keytool -keystore USDKEYCLOAK_HOME/conf/server.keystore \ -storetype bcfks \ -providername BCFIPS \ -providerclass org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider \ -provider org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider \ -providerpath USDKEYCLOAK_HOME/providers/bc-fips-*.jar \ -alias localhost \ -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword \ -dname CN=localhost -keypass passwordpassword \ -J-Djava.security.properties=/tmp/kc.keystore-create.java.security Warning Using self-signed certificates is for demonstration purposes only, so replace these certificates with proper certificates when you move to a production environment. Similar options are needed when you are doing any other manipulation with keystore/truststore of bcfks type. 14.3. Running the server. To run the server with BCFIPS in non-approved mode, enter the following command: bin/kc.[sh|bat] start --features=fips --hostname=localhost --https-key-store-password=passwordpassword --log-level=INFO,org.keycloak.common.crypto:TRACE,org.keycloak.crypto:TRACE Note In non-approved mode, the default keystore type (as well as default truststore type) is PKCS12. Hence if you generated a BCFKS keystore as described above, it is also required to use the command --https-key-store-type=bcfks . A similar command might be needed for the truststore as well if you want to use it. Note You can disable logging in production if everything works as expected. 14.4. Strict mode There is the fips-mode option, which is automatically set to non-strict when the fips feature is enabled. This means to run BCFIPS in the "non-approved mode". The more secure alternative is to use --features=fips --fips-mode=strict in which case BouncyCastle FIPS will use "approved mode". Using that option results in stricter security requirements on cryptography and security algorithms. Note In strict mode, the default keystore type (as well as default truststore type) is BCFKS. If you want to use a different keystore type it is required to use the option --https-key-store-type with appropriate type. A similar command might be needed for the truststore as well if you want to use it. When starting the server, you can check that the startup log contains KC provider with the note about Approved Mode such as the following: 14.4.1. Cryptography restrictions in strict mode As mentioned in the previous section, strict mode may not work with pkcs12 keystore. It is required to use another keystore (like bcfks ) as mentioned earlier. Also jks and pkcs12 keystores are not supported in Red Hat build of Keycloak when using strict mode. Some examples are importing or generating a keystore of an OIDC or SAML client in the Admin Console or for a java-keystore provider in the realm keys. User passwords must be 14 characters or longer. Red Hat build of Keycloak uses PBKDF2 based password encoding by default. BCFIPS approved mode requires passwords to be at least 112 bits (effectively 14 characters) with PBKDF2 algorithm.
If you want to allow a shorter password, set the property max-padding-length of provider pbkdf2-sha256 of SPI password-hashing to value 14 to provide additional padding when verifying a hash created by this algorithm. This setting is also backwards compatible with previously stored passwords. For example, if the user's database is in a non-FIPS environment and you have shorter passwords and you want to verify them now with Red Hat build of Keycloak using BCFIPS in approved mode, the passwords should work. So effectively, you can use an option such as the following when starting the server: Note Using the option above does not break FIPS compliance. However, note that longer passwords are good practice anyway. For example, passwords auto-generated by modern browsers match this requirement as they are longer than 14 characters. RSA keys of 1024 bits do not work (2048 is the minimum). This applies for keys used by the Red Hat build of Keycloak realm itself (Realm keys from the Keys tab in the admin console), but also client keys and IDP keys HMAC SHA-XXX keys must be at least 112 bits (or 14 characters long). For example if you use OIDC clients with the client authentication Signed Jwt with Client Secret (or client-secret-jwt in the OIDC notation), then your client secrets should be at least 14 characters long. Note that for good security, it is recommended to use client secrets generated by the Red Hat build of Keycloak server, which always fulfils this requirement. 14.5. Other restrictions To have SAML working, make sure that a XMLDSig security provider is available in your security providers. To have Kerberos working, make sure that a SunJGSS security provider is available. In FIPS enabled RHEL 9 in OpenJDK 17.0.6, these security providers are not present in the java.security , which means that they effectively cannot work. To have SAML working, you can manually add the provider into JAVA_HOME/conf/security/java.security into the list fips providers. For example, add the line such as the following: Adding this security provider should work well. In fact, it is FIPS compliant and likely will be added by default in the future OpenJDK 17 micro version. Details are in the bugzilla . Note It is recommended to look at JAVA_HOME/conf/security/java.security and check all configured providers here and make sure that the number matches. In other words, fips.provider.7 assumes that there are already 6 providers configured with prefix like fips.provider.N in this file. If you prefer not to edit your java.security file inside java itself, you can create a custom java security file (for example named kc.java.security ) and add only the single property above for adding XMLDSig provider into that file. Then start your Red Hat build of Keycloak server with this property file attached: For Kerberos/SPNEGO, the security provider SunJGSS is not yet fully FIPS compliant. Hence it is not recommended to add it to your list of security providers if you want to be FIPS compliant. The KERBEROS feature is disabled by default in Red Hat build of Keycloak when it is executed on FIPS platform and when security provider is not available. Details are in the bugzilla . 14.6. Run the CLI on the FIPS host If you want to run Client Registration CLI ( kcreg.sh|bat script) or Admin CLI ( kcadm.sh|bat script), the CLI must also use the BouncyCastle FIPS dependencies instead of plain BouncyCastle dependencies. To achieve this, you may copy the jars to the CLI library folder and that is enough. 
CLI tool will automatically use BCFIPS dependencies instead of plain BC when it detects that corresponding BCFIPS jars are present (see above for the versions used). For example, use command such as the following before running the CLI: Note When trying to use BCFKS truststore/keystore with CLI, you may see issues due this truststore is not the default java keystore type. It can be good to specify it as default in java security properties. For example run this command on unix based systems before doing any operation with kcadm|kcreg clients: 14.7. Red Hat build of Keycloak server in FIPS mode in containers When you want Red Hat build of Keycloak in FIPS mode to be executed inside a container, your "host" must be using FIPS mode as well. The container will then "inherit" FIPS mode from the parent host. See this section in the RHEL documentation for the details. The Red Hat build of Keycloak container image will automatically be in fips mode when executed from the host in FIPS mode. However, make sure that the Red Hat build of Keycloak container also uses BCFIPS jars (instead of BC jars) and proper options when started. Regarding this, it is best to build your own container image as described in the Running Red Hat build of Keycloak in a container and tweak it to use BCFIPS etc. For example in the current directory, you can create sub-directory files and add: BC FIPS jar files as described above Custom keystore file - named for example keycloak-fips.keystore.bcfks Security file kc.java.security with added provider for SAML Then create Dockerfile in the current directory similar to this: Dockerfile: FROM registry.redhat.io/rhbk/keycloak-rhel9:24 as builder ADD files /tmp/files/ WORKDIR /opt/keycloak RUN cp /tmp/files/*.jar /opt/keycloak/providers/ RUN cp /tmp/files/keycloak-fips.keystore.* /opt/keycloak/conf/server.keystore RUN cp /tmp/files/kc.java.security /opt/keycloak/conf/ RUN /opt/keycloak/bin/kc.sh build --features=fips --fips-mode=strict FROM registry.redhat.io/rhbk/keycloak-rhel9:24 COPY --from=builder /opt/keycloak/ /opt/keycloak/ ENTRYPOINT ["/opt/keycloak/bin/kc.sh"] Then build FIPS as an optimized Docker image and start it as described in the Running Red Hat build of Keycloak in a container . These steps require that you use arguments as described above when starting the image. 14.8. Migration from non-fips environment If you previously used Red Hat build of Keycloak in a non-fips environment, it is possible to migrate it to a FIPS environment including its data. However, restrictions and considerations exist as mentioned in sections, namely: Make sure all the Red Hat build of Keycloak functionality relying on keystores uses only supported keystore types. This differs based on whether strict or non-strict mode is used. Kerberos authentication may not work. If your authentication flow uses Kerberos authenticator, this authenticator will be automatically switched to DISABLED when migrated to FIPS environment. It is recommended to remove any Kerberos user storage providers from your realm and disable Kerberos related functionality in LDAP providers before switching to FIPS environment. 
In addition to the preceding requirements, be sure to doublecheck this before switching to FIPS strict mode: Make sure that all the Red Hat build of Keycloak functionality relying on keys (for example, realm or client keys) use RSA keys of at least 2048 bits Make sure that clients relying on Signed JWT with Client Secret use at least 14 characters long secrets (ideally generated secrets) Password length restriction as described earlier. In case your users have shorter passwords, be sure to start the server with the max padding length set to 14 of PBKDF2 provider as mentioned earlier. If you prefer to avoid this option, you can for instance ask all your users to reset their password (for example by the Forgot password link) during the first authentication in the new environment. 14.9. Red Hat build of Keycloak FIPS mode on the non-fips system Red Hat build of Keycloak is supported and tested on a FIPS enabled RHEL 8 system and ubi8 image. It is supported with RHEL 9 (and ubi9 image) as well. Running on the non-RHEL compatible platform or on the non-FIPS enabled platform, the FIPS compliance cannot be strictly guaranteed and cannot be officially supported. If you are still restricted to running Red Hat build of Keycloak on such a system, you can at least update your security providers configured in java.security file. This update does not amount to FIPS compliance, but at least the setup is closer to it. It can be done by providing a custom security file with only an overriden list of security providers as described earlier. For a list of recommended providers, see the OpenJDK 17 documentation . You can check the Red Hat build of Keycloak server log at startup to see if the correct security providers are used. TRACE logging should be enabled for crypto-related Red Hat build of Keycloak packages as described in the Keycloak startup command earlier. | [
"fips-mode-setup --check",
"fips-mode-setup --enable",
"keytool -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword -keystore USDKEYCLOAK_HOME/conf/server.keystore -alias localhost -dname CN=localhost -keypass passwordpassword",
"securerandom.strongAlgorithms=PKCS11:SunPKCS11-NSS-FIPS",
"keytool -keystore USDKEYCLOAK_HOME/conf/server.keystore -storetype bcfks -providername BCFIPS -providerclass org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider -provider org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider -providerpath USDKEYCLOAK_HOME/providers/bc-fips-*.jar -alias localhost -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword -dname CN=localhost -keypass passwordpassword -J-Djava.security.properties=/tmp/kc.keystore-create.java.security",
"bin/kc.[sh|bat] start --features=fips --hostname=localhost --https-key-store-password=passwordpassword --log-level=INFO,org.keycloak.common.crypto:TRACE,org.keycloak.crypto:TRACE",
"KC(BCFIPS version 1.000203 Approved Mode, FIPS-JVM: enabled) version 1.0 - class org.keycloak.crypto.fips.KeycloakFipsSecurityProvider,",
"--spi-password-hashing-pbkdf2-sha256-max-padding-length=14",
"fips.provider.7=XMLDSig",
"-Djava.security.properties=/location/to/your/file/kc.java.security",
"cp USDKEYCLOAK_HOME/providers/bc-fips-*.jar USDKEYCLOAK_HOME/bin/client/lib/ cp USDKEYCLOAK_HOME/providers/bctls-fips-*.jar USDKEYCLOAK_HOME/bin/client/lib/",
"echo \"keystore.type=bcfks fips.keystore.type=bcfks\" > /tmp/kcadm.java.security export KC_OPTS=\"-Djava.security.properties=/tmp/kcadm.java.security\"",
"FROM registry.redhat.io/rhbk/keycloak-rhel9:24 as builder ADD files /tmp/files/ WORKDIR /opt/keycloak RUN cp /tmp/files/*.jar /opt/keycloak/providers/ RUN cp /tmp/files/keycloak-fips.keystore.* /opt/keycloak/conf/server.keystore RUN cp /tmp/files/kc.java.security /opt/keycloak/conf/ RUN /opt/keycloak/bin/kc.sh build --features=fips --fips-mode=strict FROM registry.redhat.io/rhbk/keycloak-rhel9:24 COPY --from=builder /opt/keycloak/ /opt/keycloak/ ENTRYPOINT [\"/opt/keycloak/bin/kc.sh\"]"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_guide/fips- |
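The chapter requires two things before the server is started with --features=fips: the host must already be in FIPS mode and the three BCFIPS jars must be present in KEYCLOAK_HOME/providers. As a minimal pre-flight sketch (not part of the product documentation), the following Python script checks both conditions; it assumes a RHEL host where fips-mode-setup is available, a KEYCLOAK_HOME environment variable, and that the text "enabled" in the fips-mode-setup output indicates FIPS mode, and it only reports, it does not change anything.

#!/usr/bin/env python3
"""Sketch: verify FIPS prerequisites before starting the server with --features=fips."""

import os
import subprocess
from pathlib import Path

# Jar name prefixes that must be present in KEYCLOAK_HOME/providers.
REQUIRED_PREFIXES = ("bc-fips-", "bctls-fips-", "bcpkix-fips-")

def host_in_fips_mode() -> bool:
    # fips-mode-setup --check prints whether FIPS mode is enabled on the host.
    result = subprocess.run(["fips-mode-setup", "--check"],
                            capture_output=True, text=True)
    return "enabled" in result.stdout

def missing_bcfips_jars(providers_dir: Path) -> list:
    jars = [path.name for path in providers_dir.glob("*.jar")]
    return [prefix for prefix in REQUIRED_PREFIXES
            if not any(name.startswith(prefix) for name in jars)]

if __name__ == "__main__":
    keycloak_home = os.environ.get("KEYCLOAK_HOME")
    if not keycloak_home:
        raise SystemExit("Set KEYCLOAK_HOME to the Keycloak installation directory.")
    if not host_in_fips_mode():
        raise SystemExit("Host is not in FIPS mode; run fips-mode-setup --enable and reboot.")
    missing = missing_bcfips_jars(Path(keycloak_home) / "providers")
    if missing:
        raise SystemExit("Missing BCFIPS jars in providers/: " + ", ".join(missing))
    print("FIPS prerequisites look satisfied.")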
Chapter 2. Updating the undercloud | Chapter 2. Updating the undercloud You can use director to update the main packages on the undercloud node. To update the undercloud and its overcloud images to the latest Red Hat OpenStack Platform (RHOSP) 17.0 version, complete the following procedures: Section 2.1, "Performing a minor update of a containerized undercloud" Section 2.2, "Updating the overcloud images" Prerequisites Before you can update the undercloud to the latest RHOSP 17.0 version, ensure that you complete all the update preparation procedures. For more information, see Chapter 1, Preparing for a minor update 2.1. Performing a minor update of a containerized undercloud Director provides commands to update the main packages on the undercloud node. Use director to perform a minor update within the current version of your RHOSP environment. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Update the director main packages with the dnf update command: USD sudo dnf update -y python3-tripleoclient ansible-* Update the undercloud environment: Wait until the undercloud update process completes. Reboot the undercloud to update the operating system's kernel and other system packages: Wait until the node boots. 2.2. Updating the overcloud images You must replace your current overcloud images with new versions to ensure that director can introspect and provision your nodes with the latest version of the RHOSP software. Prerequisites You have updated the undercloud node to the latest version. For more information, see Section 2.1, "Performing a minor update of a containerized undercloud" . Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Remove any existing images from the images directory on the stack user's home ( /home/stack/images ): Extract the archives: Import the latest images into the director: USD openstack overcloud image upload --update-existing --image-path /home/stack/images/ Configure your nodes to use the new images: USD openstack overcloud node configure USD(openstack baremetal node list -c UUID -f value) Verify the existence of the new images: USD ls -l /var/lib/ironic/httpboot /var/lib/ironic/images Important When you deploy overcloud nodes, ensure that the overcloud image version corresponds to the respective heat template version. For example, use only the RHOSP 17.0 images with the RHOSP 17.0 heat templates. If you deployed a connected environment that uses the Red Hat Customer Portal or Red Hat Satellite Server, the overcloud image and package repository versions might be out of sync. To ensure that the overcloud image and package repository versions match, you can use the virt-customize tool. For more information, see the Red Hat Knowledgebase solution Modifying the Red Hat Linux OpenStack Platform Overcloud Image with virt-customize . The new overcloud-full image replaces the old overcloud-full image. If you made changes to the old image, you must repeat the changes in the new image, especially if you want to deploy new nodes in the future. | [
"source ~/stackrc",
"sudo dnf update -y python3-tripleoclient ansible-*",
"openstack undercloud upgrade",
"sudo reboot",
"source ~/stackrc",
"rm -rf ~/images/*",
"cd ~/images for i in /usr/share/rhosp-director-images/ironic-python-agent-latest-17.0.tar /usr/share/rhosp-director-images/overcloud-hardened-uefi-full-latest-17.0.tar; do tar -xvf USDi; done cd ~",
"openstack overcloud image upload --update-existing --image-path /home/stack/images/",
"openstack overcloud node configure USD(openstack baremetal node list -c UUID -f value)",
"ls -l /var/lib/ironic/httpboot /var/lib/ironic/images"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/keeping_red_hat_openstack_platform_updated/assembly_updating-the-undercloud_keeping-updated |
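After the undercloud host reboots, a quick sanity check can confirm that the update completed cleanly. The commands below are a hedged sketch rather than part of the documented procedure; the /etc/rhosp-release file and the container runtime commands may differ between RHOSP releases.
cat /etc/rhosp-release                      # confirm the reported RHOSP release
sudo systemctl --failed                     # look for services that failed to start after the reboot
sudo podman ps -a --filter "status=exited"  # look for containers that exited unexpectedly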
function::task_backtrace | function::task_backtrace Name function::task_backtrace - Hex backtrace of an arbitrary task Synopsis Arguments task pointer to task_struct Description This function returns a string of hex addresses that form a backtrace of the stack of a particular task. Output may be truncated as per the maximum string length. Deprecated in SystemTap 1.6. | [
"task_backtrace:string(task:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-backtrace |
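A minimal usage sketch for the function above; it assumes the task_current() helper from the task tapset, and because task_backtrace() is deprecated since SystemTap 1.6, newer scripts would normally use sprint_backtrace() or print_backtrace() instead.
# Print the kernel stack of the current task every 5 seconds (illustrative only).
probe timer.s(5) {
  printf("%s\n", task_backtrace(task_current()))
}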
Chapter 25. datastore | Chapter 25. datastore This chapter describes the commands under the datastore command. 25.1. datastore list List available datastores Usage: Table 25.1. Command arguments Value Summary -h, --help Show this help message and exit Table 25.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 25.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 25.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 25.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 25.2. datastore show Shows details of a datastore Usage: Table 25.6. Positional arguments Value Summary <datastore> Id of the datastore Table 25.7. Command arguments Value Summary -h, --help Show this help message and exit Table 25.8. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 25.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 25.10. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 25.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 25.3. datastore version list Lists available versions for a datastore Usage: Table 25.12. Positional arguments Value Summary <datastore> Id or name of the datastore Table 25.13. Command arguments Value Summary -h, --help Show this help message and exit Table 25.14. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 25.15. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 25.16. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 25.17. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 25.4. datastore version show Shows details of a datastore version. Usage: Table 25.18. Positional arguments Value Summary <datastore_version> Id or name of the datastore version. Table 25.19. Command arguments Value Summary -h, --help Show this help message and exit --datastore <datastore> Id or name of the datastore. Optional if the Id of the datastore_version is provided. Table 25.20. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 25.21. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 25.22. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 25.23. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack datastore list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]",
"openstack datastore show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <datastore>",
"openstack datastore version list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] <datastore>",
"openstack datastore version show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--datastore <datastore>] <datastore_version>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/datastore |
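To illustrate the synopses above, a typical session might look like the following; the datastore name mysql and version 5.7.29 are placeholders rather than values taken from this reference.
openstack datastore list
openstack datastore show mysql
openstack datastore version list mysql
openstack datastore version show 5.7.29 --datastore mysql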
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_the_cryostat_dashboard/making-open-source-more-inclusive |
Chapter 25. Storing Authentication Secrets with Vaults | Chapter 25. Storing Authentication Secrets with Vaults A vault is a secure location for storing, retrieving, sharing, and recovering secrets. A secret is security-sensitive data that should only be accessible by a limited group of people or entities. For example, secrets include: passwords PINs private SSH keys Users and services can access the secrets stored in a vault from any machine enrolled in the Identity Management (IdM) domain. Note Vault is only available from the command line, not from the IdM web UI. Use cases for vaults include: Storing personal secrets of a user See Section 25.4, "Storing a User's Personal Secret" for details. Storing a secret for a service See Section 25.5, "Storing a Service Secret in a Vault" for details. Storing a common secret used by multiple users See Section 25.6, "Storing a Common Secret for Multiple Users" for details. Note that to use vaults, you must meet the conditions described in Section 25.2, "Prerequisites for Using Vaults" . 25.1. How Vaults Work 25.1.1. Vault Owners, Members, and Administrators IdM distinguishes the following vault user types: Vault owner A vault owner is a user or service with basic management privileges on the vault. For example, a vault owner can modify the properties of the vault or add new vault members. Each vault must have at least one owner. A vault can also have multiple owners. Vault member A vault member is a user or service who can access a vault created by another user or service. Vault administrator Vault administrators have unrestricted access to all vaults and are allowed to perform all vault operations. Note Symmetric and asymmetric vaults are protected with a password or key and apply special access control rules (see Section 25.1.2, "Standard, Symmetric, and Asymmetric Vaults" ). The administrator must meet these rules to: access secrets in symmetric and asymmetric vaults change or reset the vault password or key A vault administrator is any user with the Vault Administrators privilege. See Section 10.4, "Defining Role-Based Access Controls" for information on defining user privileges. Certain owner and member privileges depend on the type of the vault. See Section 25.1.2, "Standard, Symmetric, and Asymmetric Vaults" for details. Vault User The output of some commands, such as the ipa vault-show command, also displays Vault user for user vaults: The vault user represents the user in whose container the vault is located. For details on vault containers and user vaults, see Section 25.1.4, "The Different Types of Vault Containers" and Section 25.1.3, "User, Service, and Shared Vaults" . 25.1.2. Standard, Symmetric, and Asymmetric Vaults The following vault types are based on the level of security and access control: Standard vault Vault owners and vault members can archive and retrieve the secrets without having to use a password or key. Symmetric vault Secrets in the vault are protected with a symmetric key. Vault members and vault owners can archive and retrieve the secrets, but they must provide the vault password. Asymmetric vault Secrets in the vault are protected with an asymmetric key. Users archive the secret using a public key and retrieve it using a private key. Vault members can only archive secrets, while vault owners can both archive and retrieve secrets. 25.1.3. User, Service, and Shared Vaults The following vault types are based on ownership: User vault: a private vault for a user Owner: a single user. 
Any user can own one or more user vaults. Service vault: a private vault for a service Owner: a single service. Any service can own one or more service vaults. Shared vault Owner: the vault administrator who created the vault. Other vault administrators also have full access to the vault. Shared vaults can be used by multiple users or services. 25.1.4. The Different Types of Vault Containers A vault container is a collection of vaults. IdM provides the following default vault containers: User container: a private container for a user This container stores: user vaults for a particular user. Service container: a private container for a service This container stores: service vaults for a particular service. Shared container This container stores: vaults that can be shared by multiple users or services. IdM creates user and service containers for each user or service automatically when the first private vault for the user or service is created. After the user or service is deleted, IdM removes the container and its contents. | [
"ipa vault-show my_vault Vault name: my_vault Type: standard Owner users: user Vault user: user"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/vault |
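As a rough illustration of the standard vault type described above, a personal vault could be created and used as follows; the vault name and file names are hypothetical, and the exact options should be verified with ipa help vault before use.
ipa vault-add my_vault --type standard
ipa vault-archive my_vault --in secret.txt
ipa vault-retrieve my_vault --out retrieved-secret.txt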
14.2.4. Adding a New Hard Disk Using LVM | 14.2.4. Adding a New Hard Disk Using LVM In this example, a new IDE hard disk was added. The figure below illustrates the details of the new hard disk. As shown in the figure, the disk is uninitialized and not mounted. To initialize a partition, click the Initialize Entity button. For more details, see Section 14.2.1, "Utilizing Uninitialized Entities" . Once initialized, LVM adds the new volume to the list of unallocated volumes, as illustrated in Example 14.4, "Create a new volume group" . Figure 14.13. Uninitialized hard disk | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s1-system-config-lvm-new-hdd
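For readers who prefer the command line to the graphical tool described above, a roughly equivalent sketch using the standard LVM utilities is shown below; the device name /dev/sdb, the volume group name, and the logical volume name are assumptions for illustration.
pvcreate /dev/sdb                                      # initialize the new disk as a physical volume
vgcreate new_vol_group /dev/sdb                        # create a new volume group on it
lvcreate -L 10G -n new_logical_volume new_vol_group    # carve out a logical volume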
16.6. Random Number Generator Device | 16.6. Random Number Generator Device Random number generators are very important for operating system security. For securing virtual operating systems, Red Hat Enterprise Linux 7 includes virtio-rng , a virtual hardware random number generator device that can provide the guest with fresh entropy on request. On the host physical machine, the hardware RNG interface creates a chardev at /dev/hwrng , which can be opened and then read to fetch entropy from the host physical machine. In co-operation with the rngd daemon, the entropy from the host physical machine can be routed to the guest virtual machine's /dev/random , which is the primary source of randomness. Using a random number generator is particularly useful when a device such as a keyboard, mouse, and other inputs are not enough to generate sufficient entropy on the guest virtual machine. The virtual random number generator device allows the host physical machine to pass through entropy to guest virtual machine operating systems. This procedure can be performed using either the command line or the virt-manager interface. For instructions, see below. For more information about virtio-rng , see Red Hat Enterprise Linux Virtual Machines: Access to Random Numbers Made Easy . Procedure 16.11. Implementing virtio-rng using the Virtual Machine Manager Shut down the guest virtual machine. Select the guest virtual machine and from the Edit menu, select Virtual Machine Details , to open the Details window for the specified guest virtual machine. Click the Add Hardware button. In the Add New Virtual Hardware window, select RNG to open the Random Number Generator window. Figure 16.20. Random Number Generator window Enter the intended parameters and click Finish when done. The parameters are explained in virtio-rng elements . Procedure 16.12. Implementing virtio-rng using command-line tools Shut down the guest virtual machine. Using the virsh edit domain-name command, open the XML file for the intended guest virtual machine. Edit the <devices> element to include the following: ... <devices> <rng model='virtio'> <rate period='2000' bytes='1234'/> <backend model='random'>/dev/random</backend> <!-- OR --> <backend model='egd' type='udp'> <source mode='bind' service='1234'/> <source mode='connect' host='1.2.3.4' service='1234'/> </backend> </rng> </devices> ... Figure 16.21. Random number generator device The random number generator device allows the following XML attributes and elements: virtio-rng elements <model> - The required model attribute specifies what type of RNG device is provided. <backend model> - The <backend> element specifies the source of entropy to be used for the guest. The source model is configured using the model attribute. Supported source models include 'random' and 'egd' . <backend model='random'> - This <backend> type expects a non-blocking character device as input. Examples of such devices are /dev/random and /dev/urandom . The file name is specified as contents of the <backend> element. When no file name is specified the hypervisor default is used. <backend model='egd'> - This back end connects to a source using the EGD protocol. The source is specified as a character device. See character device host physical machine interface for more information. | [
"<devices> <rng model='virtio'> <rate period='2000' bytes='1234'/> <backend model='random'>/dev/random</backend> <!-- OR --> <backend model='egd' type='udp'> <source mode='bind' service='1234'/> <source mode='connect' host='1.2.3.4' service='1234'/> </backend> </rng> </devices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_device_configuration-random_number_generator_device |
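Once the device is attached, the guest side can be checked with a short, hedged sketch such as the following; these sysfs and procfs paths are the usual locations for the hardware RNG interface, but they can vary with the guest kernel version.
cat /sys/devices/virtual/misc/hw_random/rng_available   # should list virtio_rng.0
cat /sys/devices/virtual/misc/hw_random/rng_current     # shows the RNG source in use
cat /proc/sys/kernel/random/entropy_avail               # available entropy in the guest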
Chapter 148. KafkaNodePoolSpec schema reference | Chapter 148. KafkaNodePoolSpec schema reference Used in: KafkaNodePool Property Property type Description replicas integer The number of pods in the pool. storage EphemeralStorage , PersistentClaimStorage , JbodStorage Storage configuration (disk). Cannot be updated. roles string (one or more of [controller, broker]) array The roles that the nodes in this pool will have when KRaft mode is enabled. Supported values are 'broker' and 'controller'. This field is required. When KRaft mode is disabled, the only allowed value is broker . resources ResourceRequirements CPU and memory resources to reserve. jvmOptions JvmOptions JVM Options for pods. template KafkaNodePoolTemplate Template for pool resources. The template allows users to specify how the resources belonging to this pool are generated. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkanodepoolspec-reference
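A minimal KafkaNodePool manifest using the properties listed above might look like the following; the apiVersion, the strimzi.io/cluster label, and all names and sizes are assumptions for illustration and should be checked against your Streams for Apache Kafka release.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: broker-pool                    # hypothetical pool name
  labels:
    strimzi.io/cluster: my-cluster     # hypothetical Kafka cluster name
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  resources:
    requests:
      cpu: "1"
      memory: 4Gi
    limits:
      cpu: "2"
      memory: 4Gi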
Chapter 5. Time utility functions | Chapter 5. Time utility functions Utility functions to turn seconds since the epoch (as returned by the timestamp function gettimeofday_s()) into human-readable date/time strings. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/ctime-dot-stp
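A small sketch of how this tapset is typically used, combining ctime() from this chapter with gettimeofday_s() from the timestamp functions:
probe begin {
  printf("probe started at %s\n", ctime(gettimeofday_s()))
  exit()
}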
Chapter 4. Networking | Chapter 4. Networking HAProxy HAProxy is a stand-alone, Layer 7, high-performance network load balancer for TCP and HTTP-based applications which can perform various types of scheduling based on the content of the HTTP requests. Red Hat Enterprise Linux 6.4 introduces the haproxy package as a Technology Preview. Mellanox SR-IOV Support Single Root I/O Virtualization (SR-IOV) is now supported as a Technology Preview in the Mellanox libmlx4 library and the following drivers: mlx4_core mlx4_ib (InfiniBand protocol) mlx4_en (Ethernet protocol) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_release_notes/networking
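To make the Layer 7, content-based scheduling mentioned above concrete, a minimal haproxy.cfg fragment could route requests by URL path as follows; the backend names and addresses are made up for illustration, and the package remains a Technology Preview in this release.
frontend http_in
    bind *:80
    acl is_api path_beg /api
    use_backend api_servers if is_api
    default_backend web_servers

backend api_servers
    balance roundrobin
    server api1 192.0.2.10:8080 check

backend web_servers
    balance roundrobin
    server web1 192.0.2.20:8080 check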
Chapter 7. Installing a cluster on IBM Cloud into an existing VPC | Chapter 7. Installing a cluster on IBM Cloud into an existing VPC In OpenShift Container Platform version 4.16, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud(R). The installation program provisions the rest of the required infrastructure, which you can then further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud(R) . 7.2. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). Deploying OpenShift Container Platform into an existing VPC can help you avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 7.2.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 7.2.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to the existing VPC. As part of the installation, specify the following in the install-config.yaml file: The name of the existing resource group that contains the VPC and subnets ( networkResourceGroupName ) The name of the existing VPC ( vpcName ) The subnets that were created for control plane machines and compute machines ( controlPlaneSubnets and computeSubnets ) Note Additional installer-provisioned cluster resources are deployed to a separate resource group ( resourceGroupName ). You can specify this resource group before installing the cluster. If undefined, a new resource group is created for the cluster. To ensure that the subnets that you provide are suitable, the installation program confirms the following: All of the subnets that you specify exist. For each availability zone in the region, you specify: One subnet for control plane machines. One subnet for compute machines. The machine CIDR that you specified contains the subnets for the compute machines and control plane machines. Note Subnet IDs are not supported. 7.2.3. 
Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP port 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 7.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. 
Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 7.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Cloud(R). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select ibmcloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Cloud(R) 7.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.1. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). 
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 7.7.2. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 7.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 mx2-8x64 mx2d-4x32 mx3d-2x20 ox2-4x32 ox2-8x64 ux2d-2x56 vx2d-4x56 Additional resources Optimizing storage 7.7.3. Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 11 resourceGroupName: eu-gb-example-cluster-rg 12 networkResourceGroupName: eu-gb-example-existing-network-rg 13 vpcName: eu-gb-example-network-1 14 controlPlaneSubnets: 15 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 16 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19 1 8 11 17 Required. The installation program prompts you for this value. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The name of an existing resource group. All installer-provisioned cluster resources are deployed to this resource group. If undefined, a new resource group is created for the cluster. 13 Specify the name of the resource group that contains the existing virtual private cloud (VPC). 
The existing VPC and subnets should be in this resource group. The cluster will be installed to this VPC. 14 Specify the name of an existing VPC. 15 Specify the name of the existing subnets to which to deploy the control plane machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 16 Specify the name of the existing subnets to which to deploy the compute machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 18 Enables or disables FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 19 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.7.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 
2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . 
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.10. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 7.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.13. steps Customize your cluster . Optional: Opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IC_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 11 resourceGroupName: eu-gb-example-cluster-rg 12 networkResourceGroupName: eu-gb-example-existing-network-rg 13 vpcName: eu-gb-example-network-1 14 controlPlaneSubnets: 15 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 16 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_cloud/installing-ibm-cloud-vpc |
Chapter 12. Azure ServiceBus | Chapter 12. Azure ServiceBus Since Camel 3.12 Both producer and consumer are supported. The azure-servicebus component integrates Azure ServiceBus. Azure ServiceBus is a fully managed enterprise integration message broker. Service Bus can decouple applications and services. Service Bus offers a reliable and secure platform for asynchronous transfer of data and state. Data is transferred between different applications and services using messages. Prerequisites You must have a valid Azure subscription with a Service Bus namespace. More information is available at the Azure Documentation Portal. 12.1. Dependencies When using azure-servicebus with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-azure-servicebus-starter</artifactId> </dependency> 12.2. Configuring Options Camel components are configured on two levels: Component level Endpoint level 12.2.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, URLs for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may only need to configure a few component options, or none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 12.2.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type-safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for URLs, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 12.3. Component Options The Azure ServiceBus component supports 25 options, which are listed below. Name Description Default Type amqpRetryOptions (common) Sets the retry options for Service Bus clients. If not specified, the default retry options are used. AmqpRetryOptions amqpTransportType (common) Sets the transport type by which all the communication with Azure Service Bus occurs. Default value is AmqpTransportType#AMQP. Enum values: Amqp AmqpWebSockets AMQP AmqpTransportType clientOptions (common) Sets the ClientOptions to be sent from the client built from this builder, enabling customization of certain properties, as well as support the addition of custom header information. Refer to the ClientOptions documentation for more information. ClientOptions configuration (common) The component configurations. ServiceBusConfiguration proxyOptions (common) Sets the proxy configuration to use for ServiceBusSenderAsyncClient. When a proxy is configured, AmqpTransportType#AMQP_WEB_SOCKETS must be used for the transport type. ProxyOptions serviceBusType (common) Required The service bus type of connection to execute. Queue is for typical queue option and topic for subscription based model.
Enum values: queue topic queue ServiceBusType bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean consumerOperation (consumer) Sets the desired operation to be used in the consumer. Enum values: receiveMessages peekMessages receiveMessages ServiceBusConsumerOperationDefinition disableAutoComplete (consumer) Disables auto-complete and auto-abandon of received messages. By default, a successfully processed message is \\{link ServiceBusReceiverAsyncClient#complete(ServiceBusReceivedMessage) completed}. If an error happens when the message is processed, it is \\{link ServiceBusReceiverAsyncClient#abandon(ServiceBusReceivedMessage) abandoned}. false boolean maxAutoLockRenewDuration (consumer) Sets the amount of time to continue auto-renewing the lock. Setting Duration#ZERO or null disables auto-renewal. For \\{link ServiceBusReceiveMode#RECEIVE_AND_DELETE RECEIVE_AND_DELETE} mode, auto-renewal is disabled. 5m Duration peekNumMaxMessages (consumer) Set the max number of messages to be peeked during the peek operation. Integer prefetchCount (consumer) Sets the prefetch count of the receiver. For both \\{link ServiceBusReceiveMode#PEEK_LOCK PEEK_LOCK} and \\{link ServiceBusReceiveMode#RECEIVE_AND_DELETE RECEIVE_AND_DELETE} modes the default value is 1. Prefetch speeds up the message flow by aiming to have a message readily available for local retrieval when and before the application asks for one using ServiceBusReceiverAsyncClient#receiveMessages(). Setting a non-zero value will prefetch that number of messages. Setting the value to zero turns prefetch off. int receiverAsyncClient (consumer) Autowired Sets the receiverAsyncClient in order to consume messages by the consumer. ServiceBusReceiverAsyncClient serviceBusReceiveMode (consumer) Sets the receive mode for the receiver. Enum values: PEEK_LOCK RECEIVE_AND_DELETE PEEK_LOCK ServiceBusReceiveMode subQueue (consumer) Sets the type of the SubQueue to connect to. Enum values: NONE DEAD_LETTER_QUEUE TRANSFER_DEAD_LETTER_QUEUE SubQueue subscriptionName (consumer) Sets the name of the subscription in the topic to listen to. topicOrQueueName and serviceBusType=topic must also be set. This property is required if serviceBusType=topic and the consumer is in use. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean producerOperation (producer) Sets the desired operation to be used in the producer. Enum values: sendMessages scheduleMessages sendMessages ServiceBusProducerOperationDefinition scheduledEnqueueTime (producer) Sets OffsetDateTime at which the message should appear in the Service Bus queue or topic. 
OffsetDateTime senderAsyncClient (producer) Autowired Sets SenderAsyncClient to be used in the producer. ServiceBusSenderAsyncClient serviceBusTransactionContext (producer) Represents transaction in service. This object just contains transaction id. ServiceBusTransactionContext autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean connectionString (security) Sets the connection string for a Service Bus namespace or a specific Service Bus resource. String fullyQualifiedNamespace (security) Fully Qualified Namespace of the service bus. String tokenCredential (security) A TokenCredential for Azure AD authentication, implemented in com.azure.identity. TokenCredential 12.4. Endpoint Options The Azure ServiceBus endpoint is configured using URI syntax: with the following path and query parameters: 12.4.1. Path Parameters (1 parameters) Name Description Default Type topicOrQueueName (common) Selected topic name or the queue name, that is depending on serviceBusType config. For example if serviceBusType=queue, then this will be the queue name and if serviceBusType=topic, this will be the topic name. String 12.4.2. Query Parameters (25 parameters) Name Description Default Type amqpRetryOptions (common) Sets the retry options for Service Bus clients. If not specified, the default retry options are used. AmqpRetryOptions amqpTransportType (common) Sets the transport type by which all the communication with Azure Service Bus occurs. Default value is AmqpTransportType#AMQP. Enum values: Amqp AmqpWebSockets AMQP AmqpTransportType clientOptions (common) Sets the ClientOptions to be sent from the client built from this builder, enabling customization of certain properties, as well as support the addition of custom header information. Refer to the ClientOptions documentation for more information. ClientOptions proxyOptions (common) Sets the proxy configuration to use for ServiceBusSenderAsyncClient. When a proxy is configured, AmqpTransportType#AMQP_WEB_SOCKETS must be used for the transport type. ProxyOptions serviceBusType (common) Required The service bus type of connection to execute. Queue is for typical queue option and topic for subscription based model. Enum values: queue topic queue ServiceBusType consumerOperation (consumer) Sets the desired operation to be used in the consumer. Enum values: receiveMessages peekMessages receiveMessages ServiceBusConsumerOperationDefinition disableAutoComplete (consumer) Disables auto-complete and auto-abandon of received messages. By default, a successfully processed message is \\{link ServiceBusReceiverAsyncClient#complete(ServiceBusReceivedMessage) completed}. If an error happens when the message is processed, it is \\{link ServiceBusReceiverAsyncClient#abandon(ServiceBusReceivedMessage) abandoned}. false boolean maxAutoLockRenewDuration (consumer) Sets the amount of time to continue auto-renewing the lock. Setting Duration#ZERO or null disables auto-renewal. For \\{link ServiceBusReceiveMode#RECEIVE_AND_DELETE RECEIVE_AND_DELETE} mode, auto-renewal is disabled. 5m Duration peekNumMaxMessages (consumer) Set the max number of messages to be peeked during the peek operation. 
Integer prefetchCount (consumer) Sets the prefetch count of the receiver. For both \\{link ServiceBusReceiveMode#PEEK_LOCK PEEK_LOCK} and \\{link ServiceBusReceiveMode#RECEIVE_AND_DELETE RECEIVE_AND_DELETE} modes the default value is 1. Prefetch speeds up the message flow by aiming to have a message readily available for local retrieval when and before the application asks for one using ServiceBusReceiverAsyncClient#receiveMessages(). Setting a non-zero value will prefetch that number of messages. Setting the value to zero turns prefetch off. int receiverAsyncClient (consumer) Autowired Sets the receiverAsyncClient in order to consume messages by the consumer. ServiceBusReceiverAsyncClient serviceBusReceiveMode (consumer) Sets the receive mode for the receiver. Enum values: PEEK_LOCK RECEIVE_AND_DELETE PEEK_LOCK ServiceBusReceiveMode subQueue (consumer) Sets the type of the SubQueue to connect to. Enum values: NONE DEAD_LETTER_QUEUE TRANSFER_DEAD_LETTER_QUEUE SubQueue subscriptionName (consumer) Sets the name of the subscription in the topic to listen to. topicOrQueueName and serviceBusType=topic must also be set. This property is required if serviceBusType=topic and the consumer is in use. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern producerOperation (producer) Sets the desired operation to be used in the producer. Enum values: sendMessages scheduleMessages sendMessages ServiceBusProducerOperationDefinition scheduledEnqueueTime (producer) Sets OffsetDateTime at which the message should appear in the Service Bus queue or topic. OffsetDateTime senderAsyncClient (producer) Autowired Sets SenderAsyncClient to be used in the producer. ServiceBusSenderAsyncClient serviceBusTransactionContext (producer) Represents transaction in service. This object just contains transaction id. ServiceBusTransactionContext lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionString (security) Sets the connection string for a Service Bus namespace or a specific Service Bus resource. String fullyQualifiedNamespace (security) Fully Qualified Namespace of the service bus. 
String tokenCredential (security) A TokenCredential for Azure AD authentication, implemented in com.azure.identity. TokenCredential 12.5. Async Consumer and Producer This component implements the async Consumer and producer. This allows camel route to consume and produce events asynchronously without blocking any threads. 12.6. Message Headers The Azure ServiceBus component supports 25 message header(s), which is/are listed below: Name Description Default Type CamelAzureServiceBusApplicationProperties (common) Constant: APPLICATION_PROPERTIES The application properties (also known as custom properties) on messages sent and received by the producer and consumer, respectively. Map CamelAzureServiceBusContentType (consumer) Constant: CONTENT_TYPE Gets the content type of the message. String CamelAzureServiceBusCorrelationId (consumer) Constant: CORRELATION_ID Gets a correlation identifier. String CamelAzureServiceBusDeadLetterErrorDescription (consumer) Constant: DEAD_LETTER_ERROR_DESCRIPTION Gets the description for a message that has been dead-lettered. String CamelAzureServiceBusDeadLetterReason (consumer) Constant: DEAD_LETTER_REASON Gets the reason a message was dead-lettered. String CamelAzureServiceBusDeadLetterSource (consumer) Constant: DEAD_LETTER_SOURCE Gets the name of the queue or subscription that this message was enqueued on, before it was dead-lettered. String CamelAzureServiceBusDeliveryCount (consumer) Constant: DELIVERY_COUNT Gets the number of the times this message was delivered to clients. long CamelAzureServiceBusEnqueuedSequenceNumber (consumer) Constant: ENQUEUED_SEQUENCE_NUMBER Gets the enqueued sequence number assigned to a message by Service Bus. long CamelAzureServiceBusEnqueuedTime (consumer) Constant: ENQUEUED_TIME Gets the datetime at which this message was enqueued in Azure Service Bus. OffsetDateTime CamelAzureServiceBusExpiresAt (consumer) Constant: EXPIRES_AT Gets the datetime at which this message will expire. OffsetDateTime CamelAzureServiceBusLockToken (consumer) Constant: LOCK_TOKEN Gets the lock token for the current message. String CamelAzureServiceBusLockedUntil (consumer) Constant: LOCKED_UNTIL Gets the datetime at which the lock of this message expires. OffsetDateTime CamelAzureServiceBusMessageId (consumer) Constant: MESSAGE_ID Gets the identifier for the message. String CamelAzureServiceBusPartitionKey (consumer) Constant: PARTITION_KEY Gets the partition key for sending a message to a partitioned entity. String CamelAzureServiceBusRawAmqpMessage (consumer) Constant: RAW_AMQP_MESSAGE The representation of message as defined by AMQP protocol. AmqpAnnotatedMessage CamelAzureServiceBusReplyTo (consumer) Constant: REPLY_TO Gets the address of an entity to send replies to. String CamelAzureServiceBusReplyToSessionId (consumer) Constant: REPLY_TO_SESSION_ID Gets or sets a session identifier augmenting the ReplyTo address. String CamelAzureServiceBusSequenceNumber (consumer) Constant: SEQUENCE_NUMBER Gets the unique number assigned to a message by Service Bus. long CamelAzureServiceBusSessionId (consumer) Constant: SESSION_ID Gets the session id of the message. String CamelAzureServiceBusSubject (consumer) Constant: SUBJECT Gets the subject for the message. String CamelAzureServiceBusTimeToLive (consumer) Constant: TIME_TO_LIVE Gets the duration before this message expires. Duration CamelAzureServiceBusTo (consumer) Constant: TO Gets the to address. 
String CamelAzureServiceBusScheduledEnqueueTime (common) Constant: SCHEDULED_ENQUEUE_TIME (producer)Overrides the OffsetDateTime at which the message should appear in the Service Bus queue or topic. (consumer) Gets the scheduled enqueue time of this message. OffsetDateTime CamelAzureServiceBusServiceBusTransactionContext (producer) Constant: SERVICE_BUS_TRANSACTION_CONTEXT Overrides the transaction in service. This object just contains transaction id. ServiceBusTransactionContext CamelAzureServiceBusProducerOperation (producer) Constant: PRODUCER_OPERATION Overrides the desired operation to be used in the producer. Enum values: sendMessages scheduleMessages ServiceBusProducerOperationDefinition 12.6.1. Message Body In the producer, this component accepts message body of String type or List<String> to send batch messages. In the consumer, the returned message body will be of type `String. 12.6.2. Azure ServiceBus Producer operations Operation Description sendMessages Sends a set of messages to a Service Bus queue or topic using a batched approach. scheduleMessages Sends a scheduled message to the Azure Service Bus entity this sender is connected to. A scheduled message is enqueued and made available to receivers only at the scheduled enqueue time. 12.6.3. Azure ServiceBus Consumer operations Operation Description receiveMessages Receives an <b>infinite</b> stream of messages from the Service Bus entity. peekMessages Reads the batch of active messages without changing the state of the receiver or the message source. 12.6.3.1. Examples sendMessages from("direct:start") .process(exchange -> { final List<Object> inputBatch = new LinkedList<>(); inputBatch.add("test batch 1"); inputBatch.add("test batch 2"); inputBatch.add("test batch 3"); inputBatch.add(123456); exchange.getIn().setBody(inputBatch); }) .to("azure-servicebus:test//?connectionString=test") .to("mock:result"); scheduleMessages from("direct:start") .process(exchange -> { final List<Object> inputBatch = new LinkedList<>(); inputBatch.add("test batch 1"); inputBatch.add("test batch 2"); inputBatch.add("test batch 3"); inputBatch.add(123456); exchange.getIn().setHeader(ServiceBusConstants.SCHEDULED_ENQUEUE_TIME, OffsetDateTime.now()); exchange.getIn().setBody(inputBatch); }) .to("azure-servicebus:test//?connectionString=test&producerOperation=scheduleMessages") .to("mock:result"); receiveMessages from("azure-servicebus:test//?connectionString=test") .log("USD{body}") .to("mock:result"); peekMessages from("azure-servicebus:test//?connectionString=test&consumerOperation=peekMessages&peekNumMaxMessages=3") .log("USD{body}") .to("mock:result"); 12.7. Spring Boot Auto-Configuration The component supports 26 options, which are listed below. Name Description Default Type camel.component.azure-servicebus.amqp-retry-options Sets the retry options for Service Bus clients. If not specified, the default retry options are used. The option is a com.azure.core.amqp.AmqpRetryOptions type. AmqpRetryOptions camel.component.azure-servicebus.amqp-transport-type Sets the transport type by which all the communication with Azure Service Bus occurs. Default value is AmqpTransportType#AMQP. AmqpTransportType camel.component.azure-servicebus.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.azure-servicebus.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.azure-servicebus.client-options Sets the ClientOptions to be sent from the client built from this builder, enabling customization of certain properties, as well as support the addition of custom header information. Refer to the ClientOptions documentation for more information. The option is a com.azure.core.util.ClientOptions type. ClientOptions camel.component.azure-servicebus.configuration The component configurations. The option is a org.apache.camel.component.azure.servicebus.ServiceBusConfiguration type. ServiceBusConfiguration camel.component.azure-servicebus.connection-string Sets the connection string for a Service Bus namespace or a specific Service Bus resource. String camel.component.azure-servicebus.consumer-operation Sets the desired operation to be used in the consumer. ServiceBusConsumerOperationDefinition camel.component.azure-servicebus.disable-auto-complete Disables auto-complete and auto-abandon of received messages. By default, a successfully processed message is \\{link ServiceBusReceiverAsyncClient#complete(ServiceBusReceivedMessage) completed}. If an error happens when the message is processed, it is \\{link ServiceBusReceiverAsyncClient#abandon(ServiceBusReceivedMessage) abandoned}. false Boolean camel.component.azure-servicebus.enabled Whether to enable auto configuration of the azure-servicebus component. This is enabled by default. Boolean camel.component.azure-servicebus.fully-qualified-namespace Fully Qualified Namespace of the service bus. String camel.component.azure-servicebus.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.azure-servicebus.max-auto-lock-renew-duration Sets the amount of time to continue auto-renewing the lock. Setting Duration#ZERO or null disables auto-renewal. For \\{link ServiceBusReceiveMode#RECEIVE_AND_DELETE RECEIVE_AND_DELETE} mode, auto-renewal is disabled. The option is a java.time.Duration type. Duration camel.component.azure-servicebus.peek-num-max-messages Set the max number of messages to be peeked during the peek operation. Integer camel.component.azure-servicebus.prefetch-count Sets the prefetch count of the receiver. For both \\{link ServiceBusReceiveMode#PEEK_LOCK PEEK_LOCK} and \\{link ServiceBusReceiveMode#RECEIVE_AND_DELETE RECEIVE_AND_DELETE} modes the default value is 1. 
Prefetch speeds up the message flow by aiming to have a message readily available for local retrieval when and before the application asks for one using ServiceBusReceiverAsyncClient#receiveMessages(). Setting a non-zero value will prefetch that number of messages. Setting the value to zero turns prefetch off. Integer camel.component.azure-servicebus.producer-operation Sets the desired operation to be used in the producer. ServiceBusProducerOperationDefinition camel.component.azure-servicebus.proxy-options Sets the proxy configuration to use for ServiceBusSenderAsyncClient. When a proxy is configured, AmqpTransportType#AMQP_WEB_SOCKETS must be used for the transport type. The option is a com.azure.core.amqp.ProxyOptions type. ProxyOptions camel.component.azure-servicebus.receiver-async-client Sets the receiverAsyncClient in order to consume messages by the consumer. The option is a com.azure.messaging.servicebus.ServiceBusReceiverAsyncClient type. ServiceBusReceiverAsyncClient camel.component.azure-servicebus.scheduled-enqueue-time Sets OffsetDateTime at which the message should appear in the Service Bus queue or topic. The option is a java.time.OffsetDateTime type. OffsetDateTime camel.component.azure-servicebus.sender-async-client Sets SenderAsyncClient to be used in the producer. The option is a com.azure.messaging.servicebus.ServiceBusSenderAsyncClient type. ServiceBusSenderAsyncClient camel.component.azure-servicebus.service-bus-receive-mode Sets the receive mode for the receiver. ServiceBusReceiveMode camel.component.azure-servicebus.service-bus-transaction-context Represents transaction in service. This object just contains transaction id. The option is a com.azure.messaging.servicebus.ServiceBusTransactionContext type. ServiceBusTransactionContext camel.component.azure-servicebus.service-bus-type The service bus type of connection to execute. Queue is for typical queue option and topic for subscription based model. ServiceBusType camel.component.azure-servicebus.sub-queue Sets the type of the SubQueue to connect to. SubQueue camel.component.azure-servicebus.subscription-name Sets the name of the subscription in the topic to listen to. topicOrQueueName and serviceBusType=topic must also be set. This property is required if serviceBusType=topic and the consumer is in use. String camel.component.azure-servicebus.token-credential A TokenCredential for Azure AD authentication, implemented in com.azure.identity. The option is a com.azure.core.credential.TokenCredential type. TokenCredential | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-azure-servicebus-starter</artifactId> </dependency>",
"azure-servicebus:topicOrQueueName",
"from(\"direct:start\") .process(exchange -> { final List<Object> inputBatch = new LinkedList<>(); inputBatch.add(\"test batch 1\"); inputBatch.add(\"test batch 2\"); inputBatch.add(\"test batch 3\"); inputBatch.add(123456); exchange.getIn().setBody(inputBatch); }) .to(\"azure-servicebus:test//?connectionString=test\") .to(\"mock:result\");",
"from(\"direct:start\") .process(exchange -> { final List<Object> inputBatch = new LinkedList<>(); inputBatch.add(\"test batch 1\"); inputBatch.add(\"test batch 2\"); inputBatch.add(\"test batch 3\"); inputBatch.add(123456); exchange.getIn().setHeader(ServiceBusConstants.SCHEDULED_ENQUEUE_TIME, OffsetDateTime.now()); exchange.getIn().setBody(inputBatch); }) .to(\"azure-servicebus:test//?connectionString=test&producerOperation=scheduleMessages\") .to(\"mock:result\");",
"from(\"azure-servicebus:test//?connectionString=test\") .log(\"USD{body}\") .to(\"mock:result\");",
"from(\"azure-servicebus:test//?connectionString=test&consumerOperation=peekMessages&peekNumMaxMessages=3\") .log(\"USD{body}\") .to(\"mock:result\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-azure-servicebus-component-starter |
9.8. Sound Devices | 9.8. Sound Devices Two emulated sound devices are available: The ac97 emulates an Intel 82801AA AC97 Audio compatible sound card. The es1370 emulates an ENSONIQ AudioPCI ES1370 sound card. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/sound_devices |
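For illustration only, and assuming a host where the guest is defined through libvirt domain XML rather than through a management UI, either emulated model is attached with a single sound element; only the two model names come from the description above, the fragment itself is hypothetical.

<devices>
  <!-- Intel 82801AA AC97 Audio compatible sound card -->
  <sound model='ac97'/>
  <!-- Alternatively, the ENSONIQ AudioPCI ES1370:
       <sound model='es1370'/>
  -->
</devices>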
Chapter 13. Triggering Scripts for Cluster Events | Chapter 13. Triggering Scripts for Cluster Events A Pacemaker cluster is an event-driven system, where an event might be a resource or node failure, a configuration change, or a resource starting or stopping. You can configure Pacemaker cluster alerts to take some external action when a cluster event occurs. You can configure cluster alerts in one of two ways: As of Red Hat Enterprise Linux 7.3, you can configure Pacemaker alerts by means of alert agents, which are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. This is the preferred, simpler method of configuring cluster alerts. Pacemaker alert agents are described in Section 13.1, "Pacemaker Alert Agents (Red Hat Enterprise Linux 7.3 and later)" . The ocf:pacemaker:ClusterMon resource can monitor the cluster status and trigger alerts on each cluster event. This resource runs the crm_mon command in the background at regular intervals. For information on the ClusterMon resource see Section 13.2, "Event Notification with Monitoring Resources" . 13.1. Pacemaker Alert Agents (Red Hat Enterprise Linux 7.3 and later) You can create Pacemaker alert agents to take some external action when a cluster event occurs. The cluster passes information about the event to the agent by means of environment variables. Agents can do anything with this information, such as send an email message or log to a file or update a monitoring system. Pacemaker provides several sample alert agents, which are installed in /usr/share/pacemaker/alerts by default. These sample scripts may be copied and used as is, or they may be used as templates to be edited to suit your purposes. Refer to the source code of the sample agents for the full set of attributes they support. See Section 13.1.1, "Using the Sample Alert Agents" for an example of a basic procedure for configuring an alert that uses a sample alert agent. General information on configuring and administering alert agents is provided in Section 13.1.2, "Alert Creation" , Section 13.1.3, "Displaying, Modifying, and Removing Alerts" , Section 13.1.4, "Alert Recipients" , Section 13.1.5, "Alert Meta Options" , and Section 13.1.6, "Alert Configuration Command Examples" . You can write your own alert agents for a Pacemaker alert to call. For information on writing alert agents, see Section 13.1.7, "Writing an Alert Agent" . 13.1.1. Using the Sample Alert Agents When you use one of the sample alert agents, you should review the script to ensure that it suits your needs. These sample agents are provided as a starting point for custom scripts for specific cluster environments. Note that while Red Hat supports the interfaces that the alert agents scripts use to communicate with Pacemaker, Red Hat does not provide support for the custom agents themselves. To use one of the sample alert agents, you must install the agent on each node in the cluster. For example, the following command installs the alert_file.sh.sample script as alert_file.sh . After you have installed the script, you can create an alert that uses the script. The following example configures an alert that uses the installed alert_file.sh alert agent to log events to a file. Alert agents run as the user hacluster , which has a minimal set of permissions. This example creates the log file pcmk_alert_file.log that will be used to record the events. 
It then creates the alert agent and adds the path to the log file as its recipient. The following example installs the alert_snmp.sh.sample script as alert_snmp.sh and configures an alert that uses the installed alert_snmp.sh alert agent to send cluster events as SNMP traps. By default, the script will send all events except successful monitor calls to the SNMP server. This example configures the timestamp format as a meta option. For information about meta options, see Section 13.1.5, "Alert Meta Options" . After configuring the alert, this example configures a recipient for the alert and displays the alert configuration. The following example installs the alert_smtp.sh agent and then configures an alert that uses the installed alert agent to send cluster events as email messages. After configuring the alert, this example configures a recipient and displays the alert configuration. For more information on the format of the pcs alert create and pcs alert recipient add commands, see Section 13.1.2, "Alert Creation" and Section 13.1.4, "Alert Recipients" . 13.1.2. Alert Creation The following command creates a cluster alert. The options that you configure are agent-specific configuration values that are passed to the alert agent script at the path you specify as additional environment variables. If you do not specify a value for id , one will be generated. For information on alert meta options, Section 13.1.5, "Alert Meta Options" . Multiple alert agents may be configured; the cluster will call all of them for each event. Alert agents will be called only on cluster nodes. They will be called for events involving Pacemaker Remote nodes, but they will never be called on those nodes. The following example creates a simple alert that will call myscript.sh for each event. For an example that shows how to create a cluster alert that uses one of the sample alert agents, see Section 13.1.1, "Using the Sample Alert Agents" . 13.1.3. Displaying, Modifying, and Removing Alerts The following command shows all configured alerts along with the values of the configured options. The following command updates an existing alert with the specified alert-id value. The following command removes an alert with the specified alert-id value. Alternately, you can run the pcs alert delete command, which is identical to the pcs alert remove command. Both the pcs alert delete and the pcs alert remove commands allow you to specify more than one alert to be deleted. 13.1.4. Alert Recipients Usually alerts are directed towards a recipient. Thus each alert may be additionally configured with one or more recipients. The cluster will call the agent separately for each recipient. The recipient may be anything the alert agent can recognize: an IP address, an email address, a file name, or whatever the particular agent supports. The following command adds a new recipient to the specified alert. The following command updates an existing alert recipient. The following command removes the specified alert recipient. Alternately, you can run the pcs alert recipient delete command, which is identical to the pcs alert recipient remove command. Both the pcs alert recipient remove and the pcs alert recipient delete commands allow you to remove more than one alert recipient. The following example command adds the alert recipient my-alert-recipient with a recipient ID of my-recipient-id to the alert my-alert . 
This will configure the cluster to call the alert script that has been configured for my-alert for each event, passing the recipient some-address as an environment variable. 13.1.5. Alert Meta Options As with resource agents, meta options can be configured for alert agents to affect how Pacemaker calls them. Table 13.1, "Alert Meta Options" describes the alert meta options. Meta options can be configured per alert agent as well as per recipient. Table 13.1. Alert Meta Options Meta-Attribute Default Description timestamp-format %H:%M:%S.%06N Format the cluster will use when sending the event's timestamp to the agent. This is a string as used with the date (1) command. timeout 30s If the alert agent does not complete within this amount of time, it will be terminated. The following example configures an alert that calls the script myscript.sh and then adds two recipients to the alert. The first recipient has an ID of my-alert-recipient1 and the second recipient has an ID of my-alert-recipient2. The script will get called twice for each event, with each call using a 15-second timeout. One call will be passed to the recipient [email protected] with a timestamp in the format %D %H:%M, while the other call will be passed to the recipient [email protected] with a timestamp in the format %c. 13.1.6. Alert Configuration Command Examples The following sequential examples show some basic alert configuration commands, illustrating the format to use to create alerts, add recipients, and display the configured alerts. Note that while you must install the alert agents themselves on each node in a cluster, you need to run the pcs commands only once. The following commands create a simple alert, add two recipients to the alert, and display the configured values. Since no alert ID value is specified, the system creates an alert ID value of alert. The first recipient creation command specifies a recipient of rec_value. Since this command does not specify a recipient ID, the value of alert-recipient is used as the recipient ID. The second recipient creation command specifies a recipient of rec_value2. This command specifies a recipient ID of my-recipient for the recipient. The following commands add a second alert and a recipient for that alert. The alert ID for the second alert is my-alert and the recipient value is my-other-recipient. Since no recipient ID is specified, the system provides a recipient ID of my-alert-recipient. The following commands modify the alert values for the alert my-alert and for the recipient my-alert-recipient. The following command removes the recipient my-alert-recipient from alert. The following command removes my-alert from the configuration. 13.1.7. Writing an Alert Agent There are three types of Pacemaker alerts: node alerts, fencing alerts, and resource alerts. The environment variables that are passed to the alert agents can differ, depending on the type of alert. Table 13.2, "Environment Variables Passed to Alert Agents" describes the environment variables that are passed to alert agents and specifies when the environment variable is associated with a specific alert type. Table 13.2.
Environment Variables Passed to Alert Agents Environment Variable Description CRM_alert_kind The type of alert (node, fencing, or resource) CRM_alert_version The version of Pacemaker sending the alert CRM_alert_recipient The configured recipient CRM_alert_node_sequence A sequence number increased whenever an alert is being issued on the local node, which can be used to reference the order in which alerts have been issued by Pacemaker. An alert for an event that happened later in time reliably has a higher sequence number than alerts for earlier events. Be aware that this number has no cluster-wide meaning. CRM_alert_timestamp A timestamp created prior to executing the agent, in the format specified by the timestamp-format meta option. This allows the agent to have a reliable, high-precision time of when the event occurred, regardless of when the agent itself was invoked (which could potentially be delayed due to system load or other circumstances). CRM_alert_node Name of affected node CRM_alert_desc Detail about event. For node alerts, this is the node's current state (member or lost). For fencing alerts, this is a summary of the requested fencing operation, including origin, target, and fencing operation error code, if any. For resource alerts, this is a readable string equivalent of CRM_alert_status . CRM_alert_nodeid ID of node whose status changed (provided with node alerts only) CRM_alert_task The requested fencing or resource operation (provided with fencing and resource alerts only) CRM_alert_rc The numerical return code of the fencing or resource operation (provided with fencing and resource alerts only) CRM_alert_rsc The name of the affected resource (resource alerts only) CRM_alert_interval The interval of the resource operation (resource alerts only) CRM_alert_target_rc The expected numerical return code of the operation (resource alerts only) CRM_alert_status A numerical code used by Pacemaker to represent the operation result (resource alerts only) When writing an alert agent, you must take the following concerns into account. Alert agents may be called with no recipient (if none is configured), so the agent must be able to handle this situation, even if it only exits in that case. Users may modify the configuration in stages, and add a recipient later. If more than one recipient is configured for an alert, the alert agent will be called once per recipient. If an agent is not able to run concurrently, it should be configured with only a single recipient. The agent is free, however, to interpret the recipient as a list. When a cluster event occurs, all alerts are fired off at the same time as separate processes. Depending on how many alerts and recipients are configured and on what is done within the alert agents, a significant load burst may occur. The agent could be written to take this into consideration, for example by queueing resource-intensive actions into some other instance, instead of directly executing them. Alert agents are run as the hacluster user, which has a minimal set of permissions. If an agent requires additional privileges, it is recommended to configure sudo to allow the agent to run the necessary commands as another user with the appropriate privileges. Take care to validate and sanitize user-configured parameters, such as CRM_alert_timestamp (whose content is specified by the user-configured timestamp-format ), CRM_alert_recipient , and all alert options. This is necessary to protect against configuration errors. 
In addition, if some user can modify the CIB without having hacluster -level access to the cluster nodes, this is a potential security concern as well, and you should avoid the possibility of code injection. If a cluster contains resources with operations for which the on-fail parameter is set to fence , there will be multiple fence notifications on failure, one for each resource for which this parameter is set plus one additional notification. Both the STONITH daemon and the crmd daemon will send notifications. Pacemaker performs only one actual fence operation in this case, however, no matter how many notifications are sent. Note The alerts interface is designed to be backward compatible with the external scripts interface used by the ocf:pacemaker:ClusterMon resource. To preserve this compatibility, the environment variables passed to alert agents are available prepended with CRM_notify_ as well as CRM_alert_ . One break in compatibility is that the ClusterMon resource ran external scripts as the root user, while alert agents are run as the hacluster user. For information on configuring scripts that are triggered by the ClusterMon , see Section 13.2, "Event Notification with Monitoring Resources" . | [
"install --mode=0755 /usr/share/pacemaker/alerts/alert_file.sh.sample /var/lib/pacemaker/alert_file.sh",
"touch /var/log/pcmk_alert_file.log chown hacluster:haclient /var/log/pcmk_alert_file.log chmod 600 /var/log/pcmk_alert_file.log pcs alert create id=alert_file description=\"Log events to a file.\" path=/var/lib/pacemaker/alert_file.sh pcs alert recipient add alert_file id=my-alert_logfile value=/var/log/pcmk_alert_file.log",
"install --mode=0755 /usr/share/pacemaker/alerts/alert_snmp.sh.sample /var/lib/pacemaker/alert_snmp.sh pcs alert create id=snmp_alert path=/var/lib/pacemaker/alert_snmp.sh meta timestamp-format=\"%Y-%m-%d,%H:%M:%S.%01N\" pcs alert recipient add snmp_alert value=192.168.1.2 pcs alert Alerts: Alert: snmp_alert (path=/var/lib/pacemaker/alert_snmp.sh) Meta options: timestamp-format=%Y-%m-%d,%H:%M:%S.%01N. Recipients: Recipient: snmp_alert-recipient (value=192.168.1.2)",
"install --mode=0755 /usr/share/pacemaker/alerts/alert_smtp.sh.sample /var/lib/pacemaker/alert_smtp.sh pcs alert create id=smtp_alert path=/var/lib/pacemaker/alert_smtp.sh options [email protected] pcs alert recipient add smtp_alert [email protected] pcs alert Alerts: Alert: smtp_alert (path=/var/lib/pacemaker/alert_smtp.sh) Options: [email protected] Recipients: Recipient: smtp_alert-recipient ([email protected])",
"pcs alert create path= path [id= alert-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert create id=my_alert path=/path/to/myscript.sh",
"pcs alert [config|show]",
"pcs alert update alert-id [path= path ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert remove alert-id",
"pcs alert recipient add alert-id value= recipient-value [id= recipient-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert recipient update recipient-id [value= recipient-value ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert recipient remove recipient-id",
"pcs alert recipient add my-alert value=my-alert-recipient id=my-recipient-id options value=some-address",
"pcs alert create id=my-alert path=/path/to/myscript.sh meta timeout=15s pcs alert recipient add my-alert [email protected] id=my-alert-recipient1 meta timestamp-format=\"%D %H:%M\" pcs alert recipient add my-alert [email protected] id=my-alert-recipient2 meta timestamp-format=%c",
"pcs alert create path=/my/path pcs alert recipient add alert value=rec_value pcs alert recipient add alert value=rec_value2 id=my-recipient pcs alert config Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2)",
"pcs alert create id=my-alert path=/path/to/script description=alert_description options option1=value1 opt=val meta timeout=50s timestamp-format=\"%H%B%S\" pcs alert recipient add my-alert value=my-other-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=value1 Meta options: timestamp-format=%H%B%S timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient)",
"pcs alert update my-alert options option1=newvalue1 meta timestamp-format=\"%H%M%S\" pcs alert recipient update my-alert-recipient options option1=new meta timeout=60s pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=newvalue1 Meta options: timestamp-format=%H%M%S timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: timeout=60s",
"pcs alert recipient remove my-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Alert: my-alert (path=/path/to/script) Description: alert_description Meta options: timestamp-format=\"%M%B%S\" timeout=50s Meta options: m=newval meta-option1=2 Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: timeout=60s",
"pcs alert remove my-alert pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-alertscripts-HAAR |